Tuesday, January 25, 2022

Install OpenCV on Linux/Ubuntu for Java Applications.


1. Introduction:

In this blog post, we are going to install and set up OpenCV on Ubuntu for Java applications. OpenCV is a widely used computer vision library. You can learn more from the OpenCV tutorials here, and the documentation is available here. We will also cover some tutorials for the Java bindings.
 



2. Download OpenCV:

You can download OpenCV from the public GitHub repository of OpenCV or from their official website here. Select the desired version and click "Sources", which will download the zip file. Unzip the file.
unzip opencv-4.3.0.zip

 
3. Build OpenCV:

In order to build OpenCV, go to the extracted OpenCV directory. In my case it's under "/opt/opencv-4.3.0", so I will use the same path.

Create a directory to build.
mkdir build
cd build
If you don't have CMake installed, install it using the command below:
sudo apt-get install cmake
Next, generate the build configuration with CMake:
cmake -DBUILD_SHARED_LIBS=OFF ..
Note: When OpenCV is built as a set of static libraries (-DBUILD_SHARED_LIBS=OFF option) the Java bindings dynamic library is all-sufficient, i.e. doesn’t depend on other OpenCV libs, but includes all the OpenCV code inside. 

Check the CMake summary output and make sure the Java section reports ant and JNI correctly:


If CMake doesn't find ant and Java, you may get output like this:
Java:
   ant:                         NO
   JNI:                         NO
   Java tests:                 YES
To fix this, install and set up Java (the JDK) and install ant:
sudo apt install openjdk-8-jdk
sudo apt-get install ant

If ant still shows as NO, try installing it via snap:
sudo snap install ant --classic
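After installing Java and ant, re-run the CMake configuration so that both are detected. A minimal sketch, assuming OpenJDK 8 under the default Ubuntu path (adjust JAVA_HOME and the build directory to your setup):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # assumption: default apt install path
cd /opt/opencv-4.3.0/build
rm -rf CMakeCache.txt CMakeFiles                     # clear the old cache so Java/ant are re-detected
cmake -DBUILD_SHARED_LIBS=OFF ..
After this, the Java section of the CMake summary should report ant and JNI with valid paths.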
Now start the build
make -j4
Note: Be careful not to run out of memory during the build; roughly 4 GB of memory per core is recommended. For example, if we compile with 4 cores (make -j4) we need a machine with at least 16 GB of RAM.

The output looks like this, and it will take some time.


If everything went fine, you have successfully built OpenCV. Make sure the following files were produced in the corresponding directories:
/opt/opencv-4.3.0/build/lib/libopencv_java430.so
/opt/opencv-4.3.0/build/bin/opencv-430.jar
The paths of these files depend on your OpenCV version and build directory. Make sure both the .so and the .jar file were created. The jar file contains the Java wrapper code that we will use in the sample example.

 



4. Run Sample Example:

Now we are going to add the compiled jar file to our project as a library.

For IntelliJ Idea:

Go to: File >> Project Structure >> Libraries (under Project Settings)

Click the + icon at the top left to add a new project library, select Java, and add the path of the previously created jar file, i.e. opencv-430.jar.

It's time to run a sample test example.
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class SampleTest {

    public static void main(String[] args) {
        System.load("/opt/opencv-4.3.0/build/lib/libopencv_java430.so");
        Mat mat = Mat.eye(3, 3, CvType.CV_8UC1);
        System.out.println("mat = " + mat.dump());
    }
}
Make sure that you load your corresponding .so file.
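Alternatively, instead of hard-coding the absolute path with System.load, you can load the native library by name from the JVM library path. A minimal sketch, assuming the build/lib directory is passed via -Djava.library.path when running the program:
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class SampleTestLibraryPath {

    public static void main(String[] args) {
        // Run with: java -Djava.library.path=/opt/opencv-4.3.0/build/lib SampleTestLibraryPath
        // Core.NATIVE_LIBRARY_NAME resolves to the name of the built bindings (opencv_java430 for this build).
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat mat = Mat.eye(3, 3, CvType.CV_8UC1);
        System.out.println("mat = " + mat.dump());
    }
}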

Output:
mat = [  1,   0,   0;
   0,   1,   0;
   0,   0,   1]
If you are using OpenCV in an existing Gradle project, you can set it up as below:

For Gradle:

Copy the jar file into a directory in your project, e.g. "libs", and add the following inside the dependencies block of the build.gradle file.
dependencies {
    // other dependencies

    compile fileTree(dir: 'libs', include: '*.jar')
}
Note: on newer Gradle versions the compile configuration has been removed; use implementation fileTree(dir: 'libs', include: '*.jar') instead.

How to Install and Configure a Free SSL/TLS Certificate for Tomcat Using Let's Encrypt on Ubuntu.

How to set up a free SSL certificate for Tomcat using Let's Encrypt on Ubuntu.

1. Introduction:



Here, we are going to set up a free SSL certificate provided by a non-profit authority called Let's Encrypt. It is trusted and used by many to secure their websites. The certificate is valid for 90 days and can be renewed during that time. You can find out more about Let's Encrypt here.

2. Prerequisites:

  • A running Ubuntu server
  • A running Tomcat server
  • A domain name pointed to the server IP address

3. Install certbot and create an SSL certificate:
SSH into the server where you want to create the certificate. To create an SSL certificate we need to install certbot; select the appropriate Ubuntu server version from here. We are using Ubuntu 18.04 LTS.


which gives the following commands to install certbot.

Add Certbot PPA
 
 sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
Install Certbot
sudo apt-get install certbot
If you already have a service running on port 80, stop it first; otherwise you will get an address bind exception.

To obtain an SSL certificate for your domain using certbot's built-in "standalone" webserver, type the following command:
sudo certbot certonly --standalone -d example.com
Here, replace example.com with the domain name you want to secure.

This will create the certificate files in the directory /etc/letsencrypt/live/example.com/.

Now, log in as the root user and go to that directory:
sudo -i
cd /etc/letsencrypt/live/example.com/

The next step is to convert those PEM certificate files into a password-protected PFX (PKCS12) file that we can use in the Tomcat configuration. We can do this with the OpenSSL command below.
openssl pkcs12 -export -out bundle.pfx -inkey privkey.pem -in cert.pem -certfile chain.pem -password pass:password
Replace password with your desired one. This creates a password-protected file bundle.pfx under the same directory "/etc/letsencrypt/live/example.com/", which we will reference in the Tomcat configuration.





 
4. Tomcat configuration for HTTPS:

Go to your Tomcat directory and back up the server.xml file, as we are going to change it. It's always a good approach to back up a config file before changing it.
cp conf/server.xml conf/server-copy.xml
 
Edit the server.xml file.
sudo vi conf/server.xml   # no need for sudo if you are logged in as the root user
  
You can see the following XML tag (for Tomcat 8); we are going to change it:
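For reference, in a default Tomcat 8 server.xml this entry is commented out and looks roughly like the following (your file may differ slightly by version):
<!--
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS" />
-->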



Replace the above tag so that the config looks like the example below:
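A sketch of the modified HTTPS connector, using the PFX bundle and password created above (adjust the domain path and password to yours):
<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreType="PKCS12"
           keystoreFile="/etc/letsencrypt/live/example.com/bundle.pfx"
           keystorePass="password" />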


Here, we change the port from 8443 to 443, set keystoreType to "PKCS12", keystoreFile to the path of the PFX file created previously, and keystorePass to the password we used while creating the PFX file.

Change the port 8080 to 80: 

Under server.xml you can find the following tag.
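In a default server.xml the plain HTTP connector looks roughly like this:
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />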


Change the above XML tag as below:
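A sketch of the changed connector, with the ports updated as described next:
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="443" />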


Here, we change the port from 8080 to 80 and the redirect port from 8443 to 443. By doing so, if your domain was previously served on port 8080, i.e. example.com:8080, it will now be served on port 80, i.e. example.com. If you type your domain in the browser you can reach it over both HTTP and HTTPS, i.e. http://example.com and https://example.com.

Save the server.xml file by pressing the "Esc" key, typing ":wq!", and hitting Enter.

We want to always redirect our domain to HTTPS. To do so, open the web.xml file under conf/web.xml.
sudo vi conf/web.xml
  
And add the code below at the end of the file, before the closing "</web-app>" tag.
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Entire Application</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <!-- auth-constraint goes here if you require authentication -->
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

This will always redirect requests to HTTPS.

 
5. Renew certificate:

The certificate is valid for only 90 days, so we need to renew it before expiry. To do this, stop Tomcat and type the following commands:
sudo certbot renew
 
sudo -i
cd /etc/letsencrypt/live/example.com/
openssl pkcs12 -export -out bundle.pfx -inkey privkey.pem -in cert.pem -certfile chain.pem -password pass:password
 
Don't forget to use your existing password when re-exporting the PFX file. Then restart the Tomcat server.
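If you want to run these steps together, a small renewal script might look like the sketch below. The systemctl commands are an assumption; use whatever start/stop mechanism your Tomcat installation uses (e.g. bin/shutdown.sh and bin/startup.sh).
#!/bin/bash
# Assumption: Tomcat is managed by systemd under the service name "tomcat".
sudo systemctl stop tomcat

# Renew the Let's Encrypt certificate (port 80 must be free for standalone renewal).
sudo certbot renew

# Re-export the renewed PEM files to the PFX bundle referenced in server.xml.
cd /etc/letsencrypt/live/example.com/
sudo openssl pkcs12 -export -out bundle.pfx -inkey privkey.pem -in cert.pem \
    -certfile chain.pem -password pass:password

sudo systemctl start tomcat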


Tunnel a local server to the public internet with HTTPS using ngrok.

How to tunnel a local server to the public internet with HTTPS using ngrok.


1. Introduction:

Testing an application locally over HTTPS can seem daunting. In particular, when we need to listen for webhooks from another service provider, testing locally is difficult. In this tutorial, I will show how to tunnel a local server to a public URL using ngrok.

Create an ngrok account to get a tunnel auth token, which we will use to keep the HTTPS URLs persistent for as long as ngrok keeps running.


2. Install and set up on a Windows system:




Download ngrok for Windows here and unzip it. You will see the ngrok binary file; double-click it to run it. A command prompt like the one below will open.

Now you are ready to tunnel your local server port. Simply execute the following command.
ngrok.exe http 80
Here, 80 is the port the local server is running on; use the respective port of your own local server, such as 8080, 8090, etc.

We have successfully tunneled our local server running on port 80. We can see two forwarding URLs which are publicly accessible. The session expires in about 8 hours, which means those URLs will change every 8 hours; we need an auth token to prevent this. Execute the following command.
ngrok.exe authtoken 5AXFH2DjGMu9NFHntjxZf_73wZyVZiKeCxrA1hwZWqX
Use your own auth token from your account instead. You can see the message in the command prompt.
Authtoken saved to configuration file: C:\Users\yourUser/.ngrok2/ngrok.yml
We will use this file to set up multiple port tunnels. Now tunnel your local server port again, and you will see the forwarding URLs without the session expiry limit.

Now it's time to tunnel more than one port if you need to. For this, we use the ngrok.yml config file as below:


ngrok.yml
tunnels:
  backend:
    addr: 8080
    proto: http
  frontend:
    addr: 8081
    proto: http
Here, I have two applications running on ports 8080 and 8081. Now run the ngrok.exe file again and execute the following command to start and tunnel all configured ports.
ngrok.exe start -all



We have successfully tunneled a multi-port server.

Note: if you get an "Invalid Host header" error when visiting those URLs, start the tunnel with the following command instead:
ngrok.exe http 8080 -host-header="localhost:8080"
or
ngrok.exe http --host-header=rewrite 8080
For multi-port tunnels, add the host_header option to each tunnel in the config:
tunnels:
  backend:
    addr: 8080
    proto: http
    host_header: "localhost:8080"
  frontend:
    addr: 8081
    proto: http
    host_header: "localhost:8081"




3. Install and set up on a Linux system:

Download ngrok for Linux here. Go to the download path and extract the file. You will see the ngrok binary; to make it available as a command, copy it to the /usr/bin directory.
sudo cp ~/Downloads/ngrok-stable-linux-amd64/ngrok /usr/bin/
Now type the ngrok command in the terminal; you should see the following output if it is successfully installed.


If it is not on your PATH, go to the ngrok binary's directory, open a terminal, and use ./ngrok instead of ngrok to run and tunnel your server port.

For persistent URLs, set up the auth token using the command:
ngrok authtoken 5AXFH2DjGMu9NFHntjxZf_73wZyVZiKeCxrA1hwZWqX
Use the auth token from your own ngrok account; this writes the ngrok config file in your home directory.
Authtoken saved to configuration file: /home/36olearntocode/.ngrok2/ngrok.yml
If you want to tunnel a single port, simply run the following command and you will see the output below.
ngrok http 80


Use the respective port of your server.  

If you want to tunnel multiple ports, set up the ngrok config file as:





ngrok.yml
tunnels:
  backend:
    addr: 8080
    proto: http
  frontend:
    addr: 8081
    proto: http
Now again run the command to start:
ngrok start -all

Output:



Note: if you get an "Invalid Host header" error when visiting those URLs, start the tunnel with the following command instead:
ngrok http 8080 -host-header="localhost:8080"
or
ngrok http --host-header=rewrite 8080
For multi-port tunnels, add the host_header option to each tunnel in the config:
tunnels:
  backend:
    addr: 8080
    proto: http
    host_header: "localhost:8080"
  frontend:
    addr: 8081
    proto: http
    host_header: "localhost:8081"
We have successfully tunneled our two ports, 8080 and 8081, over HTTPS.

Saturday, January 22, 2022

Grails 3: Download Saved Documents/Files (.pdf, .txt, .docx and others) Example.

How to Download different types of files using Grails 3.

If you want to implement server-side file downloads for different types of files like PDF, TXT, DOCX, etc., there are several ways to do it.

We can use the response output stream for this, as below (note the "attachment" content disposition, which tells the browser to download the file rather than display it):

    def download() {
        def filePath = "/opt/tomcat/webapps/savedFile/filename.pdf" // I am saving files on tomcat.
        def file = new File(filePath)
        if (file.exists()) {
            response.setContentType("application/octet-stream")
            response.setHeader("Content-disposition", "attachment; filename=${file.getName()}")
            response.outputStream << file.bytes
        } else {
            // handle file not found messages.
        }
    }
Here, the content type is "application/octet-stream", which works for any file type. If you want to use the specific MIME type, set the content type accordingly, for example:
                response.setContentType("application/pdf")
Or you can simply do it like this:

    def download() {
        def filePath = "/opt/tomcat/webapps/savedFile/filename.pdf" // I am saving files on tomcat.
        def file = new File(filePath)
        if (file.exists()) {
            response.setContentType("application/octet-stream")
            response.setHeader("Content-disposition", "attachment; filename=${file.getName()}")
            // stream the file instead of loading all of its bytes into memory
            response.outputStream << file.newInputStream()
        } else {
            // handle file not found messages.
        }
    }
Or, using the response output stream explicitly and flushing it when done:
    def download() {
        def filePath = "/opt/tomcat/webapps/savedFile/filename.pdf" // I am saving files on tomcat.
        def file = new File(filePath)
        if (file.exists()) {
            response.setContentType("application/octet-stream")
            response.setHeader("Content-disposition", "attachment; filename=${file.getName()}")
            def outputStream = response.getOutputStream()
            outputStream << file.bytes
            outputStream.flush()
            outputStream.close()
        } else {
            // handle file not found messages.
        }
    }
But in later Grails 3 versions, when deploying the WAR file to an external Tomcat (Tomcat 7) server, we might get a class-not-found error while downloading files.
Error 500: Internal Server Error
URI: /patient/download
Class: java.lang.ClassNotFoundException
Message: Handler dispatch failed; nested exception is java.lang.NoClassDefFoundError: javax/servlet/WriteListener
Caused by: javax.servlet.WriteListener
To solve this issue, we need to annotate the controller's action with @GrailsCompileStatic.
import grails.compiler.GrailsCompileStatic

@GrailsCompileStatic
def download() {
    def filePath = "/opt/tomcat/webapps/savedFile/filename.pdf" // I am saving files on tomcat.
    def file = new File(filePath)
    if (file.exists()) {
        response.setContentType("application/octet-stream")
        response.setHeader("Content-disposition", "attachment; filename=${file.getName()}")
        def outputStream = response.getOutputStream()
        outputStream << file.bytes
        outputStream.flush()
        outputStream.close()
    } else {
        // handle file not found messages.
    }
}

Securing Grails Application with Spring Security Rest | Rest API | Grails 3.x

How to Secure Grails Application with Spring Security Rest in Grails 3


Introduction:

In this tutorial, we are going to secure our Grails application with Spring Security, using token-based login via the Spring Security REST plugin. Here we are using Grails 3.3.0 and Java 8. This tutorial describes configuring the Spring Security Core and Spring Security REST plugins with Grails 3 to secure the application.

Pre-requisites:
  1. Running Java
  2. Running Grails
Create Grails Application:


1. Create App

To create the Grails application, open your terminal or cmd and type the following command:
grails create-app spring-security-rest --profile rest-api

// Application created at (path of app)
Open your favorite text editor or IDE and open the project that we recently created.

Go to the newly created project folder.
cd spring-security-rest
Run grails interactive mode.
grails 
To run and stop the application, simply type "run-app" and "stop-app".

2. Create Domain Class

Let's create a dummy domain class called "Product" to test security. You can use either the terminal or your IDE.
create-domain-class Product
Add some properties in the domain class.
class Product {
    String name
    Double price
    String companyName
    String description
    Date dateCreated = new Date()
    static constraints = {
    }
}
Add some data using BootStrap.groovy
class BootStrap {

    def init = { servletContext ->
        if (Product.count() == 0){
            new Product(name: "product1", price: 10, companyName: "company1", description:"description1").save(flush:true)
            new Product(name: "product2", price: 100, companyName: "company2", description:"description2").save(flush:true)
            new Product(name: "product3", price: 1000, companyName: "company3", description:"description3").save(flush:true)
            new Product(name: "product4", price: 10000, companyName: "company4", description:"description4").save(flush:true)
        }
    }
    def destroy = {
    }
}
Create a RESTful controller for the Product domain:
create-restful-controller Product
It will create a controller like this:
class ProductController extends RestfulController {
    static responseFormats = ['json', 'xml']
    ProductController() {
        super(Product)
    }
}
Run the application and simply make a GET request; the following endpoint will return the Product data that we created in BootStrap.groovy in JSON format.
http://localhost:8080/product


Spring Security Core Plugin Configuration:

In the build.gradle file, add the following within the dependencies block and run the "compile" command:
dependencies {
    compile 'org.grails.plugins:spring-security-core:3.2.0'
}
Now it's time to create the user-related domain classes (and tables); for this, exit the interactive console and run the s2-quickstart script:
exit
grails s2-quickstart spring.security.rest User Role
You can see:
CONFIGURE SUCCESSFUL
Total time: 2.965 secs
| Creating User class 'User' and Role class 'Role' in package 'spring.security.rest'
| Rendered template PersonWithoutInjection.groovy.template to destination grails-app/domain/spring/security/rest/User.groovy
| Rendered template PersonPasswordEncoderListener.groovy.template to destination src/main/groovy/spring/security/rest/UserPasswordEncoderListener.groovy
| Rendered template Authority.groovy.template to destination grails-app/domain/spring/security/rest/Role.groovy
| Rendered template PersonAuthority.groovy.template to destination grails-app/domain/spring/security/rest/UserRole.groovy
| 
************************************************************
* Created security-related domain classes. Your            *
* grails-app/conf/application.groovy has been updated with *
* the class names of the configured domain classes;        *
* please verify that the values are correct.               *
************************************************************
This will create the User.groovy, Role.groovy, and UserRole.groovy domain classes.

Now let's create user data for testing purposes in BootStrap.groovy.
def role1 = new Role(authority: "ROLE_USER").save flush: true
def user1 = new User(username: "user@gmail.com", password: "pwd@123").save flush: true
UserRole.create(user1, role1)
Spring Security Rest Plugin Configuration:
In the build.gradle file, add the following within the dependencies block and run the "compile" command:
dependencies {
    compile "org.grails.plugins:spring-security-rest:2.0.0.M2"
}
Go to application.groovy and add the chainMap configuration:
grails.plugin.springsecurity.filterChain.chainMap = [
  [pattern: '/**', filters: 'JOINED_FILTERS,-anonymousAuthenticationFilter,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'],
  [pattern: '/**', filters: 'JOINED_FILTERS,-restTokenValidationFilter,-restExceptionTranslationFilter']
]
Note: If your API endpoints start with https://address.com/api (or another prefix), then your chainMap looks like:
grails.plugin.springsecurity.filterChain.chainMap = [
  [pattern: '/api**', filters: 'JOINED_FILTERS,-anonymousAuthenticationFilter,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'],
  [pattern: '/**', filters: 'JOINED_FILTERS,-restTokenValidationFilter,-restExceptionTranslationFilter']
]



The final configuration in application.groovy looks like this:

// Added by the Spring Security Core plugin:
grails.plugin.springsecurity.userLookup.userDomainClassName = 'spring.security.rest.User'
grails.plugin.springsecurity.userLookup.authorityJoinClassName = 'spring.security.rest.UserRole'
grails.plugin.springsecurity.authority.className = 'spring.security.rest.Role'
grails.plugin.springsecurity.controllerAnnotations.staticRules = [
 [pattern: '/',               access: ['permitAll']],
 [pattern: '/error',          access: ['permitAll']],
 [pattern: '/index',          access: ['permitAll']],
 [pattern: '/index.gsp',      access: ['permitAll']],
 [pattern: '/shutdown',       access: ['permitAll']],
 [pattern: '/assets/**',      access: ['permitAll']],
 [pattern: '/**/js/**',       access: ['permitAll']],
 [pattern: '/**/css/**',      access: ['permitAll']],
 [pattern: '/**/images/**',   access: ['permitAll']],
 [pattern: '/**/favicon.ico', access: ['permitAll']],
 [pattern: '/**',             access: ['isFullyAuthenticated()']]
]

grails.plugin.springsecurity.filterChain.chainMap = [
  [pattern: '/**', filters: 'JOINED_FILTERS,-anonymousAuthenticationFilter,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'],
  [pattern: '/**', filters: 'JOINED_FILTERS,-restTokenValidationFilter,-restExceptionTranslationFilter']
]


Testing Secured REST API:
Now re-run the application and request the endpoint "http://localhost:8080/product"


This shows that our application is now secured. I will make a separate tutorial on customizing this response message format.

Next, test the login endpoint with the user we created in BootStrap.groovy. Here I am using Postman.
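For reference, the Spring Security REST plugin exposes a token login endpoint (by default /api/login) that accepts the credentials as JSON and returns an access token. A rough sketch of the exchange (the exact response fields depend on the plugin version and token renderer):

POST http://localhost:8080/api/login
Content-Type: application/json

{"username": "user@gmail.com", "password": "pwd@123"}

Sample response (shape may vary):

{"username": "user@gmail.com", "roles": ["ROLE_USER"], "access_token": "eyJhbGciOiJIUzI1..."}

The token is then sent on subsequent requests:

GET http://localhost:8080/product
Authorization: Bearer eyJhbGciOiJIUzI1...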
Now access the data with authorization.

Here, for the Authorization header use "Bearer <access_token>". You can see the data as shown above, because in application.groovy we configured access so that any fully authenticated user can reach the endpoints:
[pattern: '/**',             access: ['isFullyAuthenticated()']]
In the next tutorial, I will cover further configuration customizations and describe the different security annotations.


Friday, January 21, 2022

Splitting resources.groovy configuration in Grails applications


1. Introduction:

In Grails, when our resources.groovy file grows, it is better to split logically related bean configuration into separate files and import them into resources.groovy. Grails provides a neat way to specify Spring bean definitions with its custom beans DSL, using importBeans or loadBeans.

2. How to split resources.groovy

You can simply create a resource file under the "grails-app/conf/spring/" directory, e.g. (the bean shown below is only an illustrative placeholder):

firstBean.groovy:
beans {
    // example bean definition (class and property names are illustrative)
    mySampleService(com.example.MySampleService) {
        someProperty = "someValue"
    }
}

3. Import external bean file

Grails provides importBeans and loadBeans to import an external bean file, which can be done inside resources.groovy as below:
beans = { 
  importBeans('file:grails-app/conf/spring/firstBean.groovy') 
}

The problem is that this only works if you run the application via grails run-app with the embedded servlet container.

As soon as we create a WAR file and deploy it to Tomcat, we run into trouble: the Spring bean configuration file is packaged into a different folder, "WEB-INF/classes/spring/", so we get a file-not-found exception.

To resolve this, we need to resolve the resource path differently in resources.groovy depending on how the application is running.
def loadFromFile = { name ->
    importBeans("file:grails-app/conf/spring/" + name)
}

def loadFromWar = { name ->
    def resource = application.parentContext.getResource("WEB-INF/classes/spring/" + name)
    loadBeans(resource)
}

def loadResource = application.isWarDeployed() ? loadFromWar : loadFromFile

loadResource "firstBean.groovy"
loadResource "secondBean.groovy"



Here, if the application is running via grails run-app it uses the path "grails-app/conf/spring/", and if it is deployed as a WAR file to an external Tomcat it uses the path "WEB-INF/classes/spring/".

Note: Be sure that your resource file is packaged in the WAR at the right path.

How to resolve "Server failed to start for port 8080: Address already in use" in Grails

On Linux:

Step 1: If something is already running on port 8080, we first have to find and kill that process. Open your terminal and type:

lsof -w -n -i tcp:8080

This shows the list of open files (processes) listening on port 8080. If your server runs on a different port, e.g. 9090, type lsof -w -n -i tcp:9090 instead. Here, lsof stands for "list open files". The output includes the PID number of each process.



Step 2: Type sudo kill -9 6911, where 6911 is the PID number. Here 9 is the signal number for SIGKILL; you can see the signal list by typing kill -l in the terminal. You can use -SIGKILL instead of -9.

Step 3: Run your application.

On Windows:

Open a command prompt as administrator and type the following command:
   
   netstat -ano | findstr :8080

This command will find and list the process that uses this port.

Now we need to kill the process; for this, use the following command:

   taskkill /pid 5884 /f

where 5884 is the PID number. Use the PID you obtained previously.

Error: ENOENT: no such file or directory, stat '../pancake-swap-periphery/node_modules/@uniswap/v2-core/contracts/interfaces/IPancakeFactory.sol'

While trying to build a PancakeSwap clone, we might get the following issue while compiling the contracts.

Error: ENOENT: no such file or directory, stat '../pancake-swap-periphery/node_modules/@uniswap/v2-core/contracts/interfaces/IPancakeFactory.sol'

This happens because the file is not found in node_modules, as the error suggests. If you look at the package.json file, you can see the following entry inside the dependencies:

"@uniswap/v2-core": "^1.0.1"

So here, the pancake-swap-core dependency files that the contract imports are not found.

Let's add the required dependency from the pancake-swap-core Git repository.

Replace the above dependency with:
"@uniswap/v2-core": "git://github.com/pancakeswap/pancake-swap-core.git"

Now install the added dependency

npm install
After that, we can try building the contract.


Modify Dicom file metadata using dcm4che in Java

How to modify Dicom file metadata using dcm4che.

Please follow our previous tutorials on dcm4che.

Read the Dicom header information using dcm4che in Java

Read Dicom file metadata using dcm4che in Java

Let's create a sample java class ModifyMetadata.java to modify the available metadata in the given Dicom file.

 public static void main(String[] args) {
        String inputDicomPath = "path/to/dicom/N2D_0001.dcm";
        String outputDicomPath = "output/path/to/dicom/N2D_0001.dcm";
        try {
            modifyMetadata(inputDicomPath, outputDicomPath);
        } catch (IOException e) {
            System.out.println("Error due to: "+e.getMessage());
        }
    }
 private static void modifyMetadata(String inputDicomPath, String outputDicomPath) throws IOException {
        File file = new File(inputDicomPath);
        DicomInputStream dis = new DicomInputStream(file);
        Attributes attributes = dis.readDataset();
        if (attributes.contains(Tag.PatientID))
            attributes.setString(Tag.PatientID, VR.LO, "L500400");
        if (attributes.contains(Tag.PatientName))
            attributes.setString(Tag.PatientName, VR.PN, "Test name");
        DicomOutputStream dos = new DicomOutputStream(new File(outputDicomPath));
        attributes.writeTo(dos);
        dis.close();
        dos.close();
    }

Here we are using the standard dcm4che library classes. Please follow our previous tutorial to set up the dcm4che jar files. We have created a modifyMetadata() method which accepts two arguments: the input Dicom file path, and the output Dicom file path where we write the file after modification.

DicomInputStream reads the file. We read the whole dataset to get the attributes, then modify the patient ID and the patient name.

After modification, DicomOutputStream writes the modified attributes to the output path. If we open the resulting image, the name and patient ID are changed. We can modify any available metadata in the same way. For Tag and VR values, follow the Dicom tag library.
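For example, to additionally change the study date (an illustrative addition, not part of the original example), you could add another setString call before writing the output:

        // illustrative only: update the study date (VR.DA holds dates in yyyyMMdd form)
        attributes.setString(Tag.StudyDate, VR.DA, "20220101");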


The overall code implementation looks as below

package dicom.dcm4che;

import org.dcm4che3.data.Attributes;
import org.dcm4che3.data.Tag;
import org.dcm4che3.data.VR;
import org.dcm4che3.io.DicomInputStream;
import org.dcm4che3.io.DicomOutputStream;

import java.io.File;
import java.io.IOException;

public class ModifyMetadata {

    public static void main(String[] args) {
        String inputDicomPath = "path/to/dicom/N2D_0001.dcm";
        String outputDicomPath = "output/path/to/dicom/N2D_0001.dcm";
        try {
            modifyMetadata(inputDicomPath, outputDicomPath);
        } catch (IOException e) {
            System.out.println("Error due to: "+e.getMessage());
        }
    }

    private static void modifyMetadata(String inputDicomPath, String outputDicomPath) throws IOException {
        File file = new File(inputDicomPath);
        DicomInputStream dis = new DicomInputStream(file);
        Attributes attributes = dis.readDataset();
        if (attributes.contains(Tag.PatientID))
            attributes.setString(Tag.PatientID, VR.LO, "L500400");
        if (attributes.contains(Tag.PatientName))
            attributes.setString(Tag.PatientName, VR.PN, "Test name");
        DicomOutputStream dos = new DicomOutputStream(new File(outputDicomPath));
        attributes.writeTo(dos);
        dis.close();
        dos.close();
    }
}

Read Dicom file metadata using dcm4che in Java

How to read Dicom file metadata using dcm4che in Java.

dcm4che is a collection of open-source applications and utilities for the healthcare enterprise, used for processing, manipulating, and analyzing medical images.

Please follow the tutorial on setting up dcm4che in a Java application.

Let's create a java class DicomMetadataReader.java to read the available metadata in the given Dicom file.

public static void main(String[] args) {
        String inputDicomFilePath = "path/to/dicom/N2D_0001.dcm";
        try {
            readMetadata(inputDicomFilePath);
        } catch (IOException e) {
            System.out.println("Error due to: "+e.getMessage());
        }
    }
private static void readMetadata(String dicomFilePath) throws IOException {
        File file = new File(dicomFilePath);
        DicomInputStream dis = new DicomInputStream(file);
        Attributes attributes = dis.readDataset();
        int[] tags = attributes.tags();
        System.out.println("Total tag found in dicom file: "+tags.length);
        for (int tag: tags) {
            String tagAddress = TagUtils.toString(tag);
            String tagValue = attributes.getString(tag);
            System.out.println("Tag Address: " + tagAddress + " Value: " + tagValue);
        }
        dis.close();
    }

Here, we are using standard classes from the dcm4che core library jar. DicomInputStream reads the file. We read the file's dataset and get its tags, then loop through the tags and print each tag address and its corresponding value.
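If you only need specific values rather than iterating over every tag, you can read them directly from the same Attributes object (this assumes org.dcm4che3.data.Tag is also imported):

        // read individual values directly by their tag constants
        String patientName = attributes.getString(Tag.PatientName);
        String modality = attributes.getString(Tag.Modality);
        System.out.println("Patient: " + patientName + ", Modality: " + modality);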

The sample Output:

Total tag found in dicom file: 54
Tag Address: (0008,0008) Value: DERIVED
Tag Address: (0008,0016) Value: 1.2.840.10008.5.1.4.1.1.4
Tag Address: (0008,0018) Value: 1.2.826.0.1.3680043.2.1143.1590429688519720198888333603882344634
Tag Address: (0008,0020) Value: 20130717
Tag Address: (0008,0021) Value: 20130717
Tag Address: (0008,0022) Value: 20130717
Tag Address: (0008,0023) Value: 20130717
Tag Address: (0008,0030) Value: 141500
Tag Address: (0008,0031) Value: 142035.93000
Tag Address: (0008,0032) Value: 132518
Tag Address: (0008,0033) Value: 142035.93
Tag Address: (0008,0050) Value: null
Tag Address: (0008,0060) Value: MR
Tag Address: (0008,0070) Value: BIOLAB
Tag Address: (0008,0080) Value: null
Tag Address: (0008,0090) Value: null
Tag Address: (0008,1030) Value: Hanke_Stadler^0024_transrep
Tag Address: (0008,103E) Value: anat-T1w
Tag Address: (0008,1090) Value: nifti2dicom
Tag Address: (0010,0010) Value: Jane_Doe
Tag Address: (0010,0020) Value: 02
Tag Address: (0010,0030) Value: 19660101
Tag Address: (0010,0040) Value: F
Tag Address: (0010,1000) Value: null
Tag Address: (0010,1010) Value: 42
Tag Address: (0010,1030) Value: 75
Tag Address: (0010,21C0) Value: 4
Tag Address: (0018,0050) Value: 0.666666686534882
Tag Address: (0018,0088) Value: 0.666666686534882
Tag Address: (0018,1020) Value: 0.4.11
Tag Address: (0018,1030) Value: anat-T1w
Tag Address: (0020,000D) Value: 1.2.826.0.1.3680043.2.1143.2592092611698916978113112155415165916
Tag Address: (0020,000E) Value: 1.2.826.0.1.3680043.2.1143.515404396022363061013111326823367652
Tag Address: (0020,0010) Value: 433724515
Tag Address: (0020,0011) Value: 401
Tag Address: (0020,0012) Value: 1
Tag Address: (0020,0013) Value: 1
Tag Address: (0020,0020) Value: L
Tag Address: (0020,0032) Value: -91.4495864331908
Tag Address: (0020,0037) Value: 0.999032176441525
Tag Address: (0020,0052) Value: 1.2.826.0.1.3680043.2.1143.6856184167807409206647724161920598374
Tag Address: (0028,0002) Value: 1
Tag Address: (0028,0004) Value: MONOCHROME2
Tag Address: (0028,0010) Value: 384
Tag Address: (0028,0011) Value: 274
Tag Address: (0028,0030) Value: 0.666666686534882
Tag Address: (0028,0100) Value: 16
Tag Address: (0028,0101) Value: 16
Tag Address: (0028,0102) Value: 15
Tag Address: (0028,0103) Value: 1
Tag Address: (0028,1052) Value: 0
Tag Address: (0028,1053) Value: 1
Tag Address: (0028,1054) Value: US
Tag Address: (7FE0,0010) Value: 0

Please refer to the Dicom standard tag library for each tag and its description.

The overall code implementation looks as below:

package dicom.dcm4che;

import org.dcm4che3.data.Attributes;
import org.dcm4che3.io.DicomInputStream;
import org.dcm4che3.util.TagUtils;

import java.io.File;
import java.io.IOException;

public class DicomMetadataReader {

    public static void main(String[] args) {
        String inputDicomFilePath = "path/to/dicom/N2D_0001.dcm";
        try {
            readMetadata(inputDicomFilePath);
        } catch (IOException e) {
            System.out.println("Error due to: "+e.getMessage());
        }
    }

    private static void readMetadata(String dicomFilePath) throws IOException {
        File file = new File(dicomFilePath);
        DicomInputStream dis = new DicomInputStream(file);
        Attributes attributes = dis.readDataset();
        int[] tags = attributes.tags();
        System.out.println("Total tag found in dicom file: "+tags.length);
        for (int tag: tags) {
            String tagAddress = TagUtils.toString(tag);
            String tagValue = attributes.getString(tag);
            System.out.println("Tag Address: " + tagAddress + " Value: " + tagValue);
        }
        dis.close();
    }
}