WebSphere, SQL Server and Deadlock

How to avoid deadlocks with WebSphere and SQL Server

When you create a datasource in WebSphere for a SQL Server database, the default isolation level is Repeatable Read. At first it seems the best choice, but Repeatable Read means that read locks are held until the end of the transaction, and with held locks a deadlock can happen at any time!

For example, consider this application stack:

Java 8
Spring Boot 2.4.1
WebSphere 8.5.5
SQL Server 2008

If you need a transaction with REQUIRES_NEW propagation, a deadlock can happen if the parent and the child transaction update the same tables, because the isolation level is Repeatable Read.
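For instance, a sketch of the problematic pattern (service and table names are made up):

@Service
public class OrderService {

	@Autowired
	private AuditService auditService;

	@Transactional // parent transaction: updates the orders table
	public void processOrder(Long orderId) {
		// ... update a row of the orders table ...
		auditService.audit(orderId); // opens the child transaction
	}
}

@Service
public class AuditService {

	// REQUIRES_NEW suspends the parent transaction and opens a new one;
	// under Repeatable Read the locks held by the suspended parent are not
	// released, so the child can block on them and deadlock
	@Transactional(propagation = Propagation.REQUIRES_NEW)
	public void audit(Long orderId) {
		// ... update the same orders table ...
	}
}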

The solution is to relax the isolation level to Read Committed. You avoid deadlocks, and with Hibernate optimistic locking there is no risk of losing information.
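With Read Committed plus optimistic locking, a concurrent modification is detected at commit time instead of being prevented by held locks. A minimal sketch of a versioned entity (the entity is made up):

@Entity
public class Order {
	@Id
	@GeneratedValue(strategy = GenerationType.AUTO)
	private Long id;

	// Hibernate checks this column on every update: if another transaction
	// has modified the row in the meantime, an OptimisticLockException is thrown
	@Version
	private Long version;

	// ... other fields, getters and setters ...
}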

To set the isolation level of a datasource in WebSphere, navigate the path:

Data sources > [datasource name] > Custom properties

and set

webSphereDefaultIsolationLevel=2

where 2 means Read Committed.

That’s all you need to avoid deadlocks!

And remember, if you need a pessimistic lock, with JPA it is simple to set a table lock for a single query (see my previous post about it).
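For example, a sketch with the standard JPA lock modes (the entity and the query are made up):

// pessimistic write lock on a single entity...
Order order = entityManager.find(Order.class, orderId, LockModeType.PESSIMISTIC_WRITE);

// ...or on a single query
List<Order> orders = entityManager
		.createQuery("select o from Order o where o.status = :status", Order.class)
		.setParameter("status", "PENDING")
		.setLockMode(LockModeType.PESSIMISTIC_WRITE)
		.getResultList();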


Using Spring Security 5 to integrate with an OAuth2-secured RESTful API without login and servlet context.

How to configure a Spring Security OAuth2 client that can operate outside the context of an HttpServletRequest,
e.g. in a scheduled/background thread and/or in the service tier.

With the new Spring Security 5 there are a lot of examples about how to configure a client to access services like Facebook, GitHub and many others with standard OAuth2.
But today I had difficulty finding documentation about how to access an OAuth2-secured RESTful API with a RestTemplate client, without login and servlet context. So I had to debug the Spring Security framework to figure out the right configuration.

My environment is as follows:

Java 8
Spring Boot 2.4.1
Spring Security 5.4.2
Spring Web 5.3.2

The pom dependencies are:

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-oauth2-client</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

The APIs that I have to access are secured with OAuth2 with GRANT_TYPE=PASSWORD and a client authentication method equal to POST.
Every call to the API must contain an Authorization header with an access token of type Bearer.
To obtain the access token, we need a token URI, a client ID and the client username/password. All this information must be provided by the resource server.

What we need is a RestTemplateConfig. The file of this example can be found here.

We’re going to see the config step by step. First of all we need a ClientRegistrationRepository:

    ClientRegistration.Builder b = ClientRegistration.withRegistrationId(registrationId);
    b.authorizationGrantType(AuthorizationGrantType.PASSWORD);
    b.clientAuthenticationMethod(ClientAuthenticationMethod.POST);
    b.tokenUri(tokenUri);
    b.clientId(clientId);
    ClientRegistrationRepository clients = new InMemoryClientRegistrationRepository(b.build());

The tokenUri and the clientId must be provided by the resource server.

Then we need the service:

    OAuth2AuthorizedClientService service = new InMemoryOAuth2AuthorizedClientService(clients);

the authorized client provider:

    OAuth2AuthorizedClientProvider authorizedClientProvider = OAuth2AuthorizedClientProviderBuilder.builder().password().refreshToken().build();

and the manager:

    AuthorizedClientServiceOAuth2AuthorizedClientManager manager = new AuthorizedClientServiceOAuth2AuthorizedClientManager(
            clients, service);
    manager.setAuthorizedClientProvider(authorizedClientProvider);
    manager.setContextAttributesMapper(new Function<OAuth2AuthorizeRequest, Map<String, Object>>() {

        @Override
        public Map<String, Object> apply(OAuth2AuthorizeRequest authorizeRequest) {
            Map<String, Object> contextAttributes = new HashMap<>();
            String scope = authorizeRequest.getAttribute(OAuth2ParameterNames.SCOPE);
            if (StringUtils.hasText(scope)) {
                contextAttributes.put(OAuth2AuthorizationContext.REQUEST_SCOPE_ATTRIBUTE_NAME,
                        StringUtils.delimitedListToStringArray(scope, " "));
            }

            String username = authorizeRequest.getAttribute(OAuth2ParameterNames.USERNAME);
            if (StringUtils.hasText(username)) {
                contextAttributes.put(OAuth2AuthorizationContext.USERNAME_ATTRIBUTE_NAME, username);
            }

            String password = authorizeRequest.getAttribute(OAuth2ParameterNames.PASSWORD);
            if (StringUtils.hasText(password)) {
                contextAttributes.put(OAuth2AuthorizationContext.PASSWORD_ATTRIBUTE_NAME, password);
            }

            return contextAttributes;
        }

    });

Then we can add an interceptor to the RestTemplate:

@Bean
public RestTemplate restTemplate(RestTemplateBuilder builder, OAuth2AuthorizedClientManager manager) {
    RestTemplate restTemplate = builder.build();
    restTemplate.getInterceptors().add(new BearerTokenInterceptor(manager, username, password, registrationId));

    return restTemplate;
}


public class BearerTokenInterceptor implements ClientHttpRequestInterceptor {
    private final Logger LOG = LoggerFactory.getLogger(BearerTokenInterceptor.class);

    private OAuth2AuthorizedClientManager manager;
    private String username;
    private String password;
    private String registrationId;

    public BearerTokenInterceptor(OAuth2AuthorizedClientManager manager, String username, String password, String registrationId) {
        this.manager = manager;
        this.username = username; 
        this.password = password;
        this.registrationId = registrationId; 
    }

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] bytes, ClientHttpRequestExecution execution)
            throws IOException {
        OAuth2AuthorizedClient client = manager.authorize(OAuth2AuthorizeRequest.withClientRegistrationId(registrationId)
                .attribute(OAuth2ParameterNames.USERNAME, username)
                .attribute(OAuth2ParameterNames.PASSWORD, password)
                // outside a servlet context there is no authenticated user:
                // a simple principal name is enough for the authorized client service
                .principal(username).build());
        String accessToken = client != null && client.getAccessToken() != null ? client.getAccessToken().getTokenValue() : null;
        if (accessToken != null) {
            LOG.debug("Request body: {}", new String(bytes, StandardCharsets.UTF_8));
            request.getHeaders().add("Authorization", "Bearer " + accessToken);
            return execution.execute(request, bytes);
        } else {
            throw new IllegalStateException("Can't access the API without an access token");
        }
    }

}

Before every call, the manager tries to authorize the client with the username and password provided by the resource server. If the authentication is successful, the server returns a JSON like this:

{
    "access_token": "hjdhjYU00jjTYYT….",
    "token_type": "Bearer",
    "expires_in": "3600",
    "refresh_token": "hdshTT55jhds…"
}

Since the server supports refresh tokens, we have configured the authorizedClientProvider to refresh the access token when the provided one has expired.
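Here is a usage sketch (the endpoint URL is made up); the interceptor adds the Authorization header transparently:

@Autowired
private RestTemplate restTemplate;

public String callApi() {
	// the BearerTokenInterceptor obtains (or refreshes) the access token
	// before the request is executed
	return restTemplate.getForObject("https://api.example.com/resource", String.class);
}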

That’s all!

Converting fixed fields text record to JSON

Using a REST service to convert fixed fields text records to JSON with the Fixefid Java library.

Converting a fixed fields text record to JSON can be done in many ways. One solution is a REST service combined with the Fixefid Java library.

The environment is as follows:

  • Java 8
  • Spring Boot 2.3.4.RELEASE
  • Spring Web
  • Fixefid 1.1.0
  • Spring Doc Openapi 1.5.0

The Fixefid Java library lets you define a fixed fields text record with a Java Bean or a Java Enum. In this case the Java Bean definition can be used to annotate a resource representation class of a REST service.

For instance, we want to convert a customer record like this one:

String record = "0000000000000000001Paul                                              Robinson                                          ";

to a JSON object like this one:

{
    "id": 1,
    "firstName": "Paul",
    "lastName": "Robinson"
}

To model the customer representation, we can create a resource representation class:

@FixefidRecord
public class Customer {
	@FixefidField(fieldLen = 19, fieldOrdinal = 0, fieldType = FieldType.N)
	private Long id;

	@FixefidField(fieldLen = 50, fieldOrdinal = 1, fieldType = FieldType.AN)
	private String firstName;

	@FixefidField(fieldLen = 50, fieldOrdinal = 2, fieldType = FieldType.AN)
	private String lastName;

	protected Customer() {
	}

	public Customer(String firstName, String lastName) {
		this.firstName = firstName;
		this.lastName = lastName;
	}

	@Override
	public String toString() {
		return String.format("Customer[id=%d, firstName='%s', lastName='%s']", id, firstName, lastName);
	}

	public Long getId() {
		return id;
	}

	public String getFirstName() {
		return firstName;
	}

	public String getLastName() {
		return lastName;
	}
}

The resource representation class is annotated with the Fixefid annotations. Then we can create the record request:

public class RecordRequest {
	private Long requestId;
	private String record;

	public String getRecord() {
		return record;
	}

	public void setRecord(String record) {
		this.record = record;
	}

	public Long getRequestId() {
		return requestId;
	}

	public void setRequestId(Long requestId) {
		this.requestId = requestId;
	}
}

and the Customer response:

public class CustomerResponse {
	private Long requestId;
	private Long responseId;
	private Customer customer;

	public CustomerResponse(Long requestId, Long responseId, Customer customer) {
		this.requestId = requestId;
		this.responseId = responseId;
		this.customer = customer;
	}

	public Long getRequestId() {
		return requestId;
	}

	public void setRequestId(Long requestId) {
		this.requestId = requestId;
	}

	public Long getResponseId() {
		return responseId;
	}

	public void setResponseId(Long responseId) {
		this.responseId = responseId;
	}

	public Customer getCustomer() {
		return customer;
	}

	public void setCustomer(Customer customer) {
		this.customer = customer;
	}
}

Last, the REST controller:

@RestController
public class CustomerController {
	private final AtomicLong counter = new AtomicLong();

	@PostMapping(path = "/recordtocustomer", consumes = "application/json", produces = "application/json")
	public CustomerResponse recordToCustomer(@RequestBody RecordRequest request) {
		Customer customer = new Customer(null, null);
		new BeanRecord(customer, request.getRecord());
		return new CustomerResponse(request.getRequestId(), counter.incrementAndGet(), customer);
	}
}

We can test the service with Postman or any other HTTP client.
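For example, posting the record above (the requestId value is illustrative):

POST /recordtocustomer

{
    "requestId": 1,
    "record": "0000000000000000001Paul                                              Robinson                                          "
}

returns a response like this (the responseId is generated by the counter):

{
    "requestId": 1,
    "responseId": 1,
    "customer": {
        "id": 1,
        "firstName": "Paul",
        "lastName": "Robinson"
    }
}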

Here is the project of the example on GitHub.

Force dynamic WSDL address location to the https scheme

Nowadays all web applications must be secured with the https protocol. A typical scenario involves a load balancer which redirects all incoming https connections to application servers responding on an http port.

We had a problem with the dynamic WSDL address location exposed by an old web application, based on Spring MVC 3.0, behind a load balancer. A dynamic WSDL is created at runtime by Spring-WS with the dynamic-wsdl tag in the Spring-WS context:

<sws:dynamic-wsdl id="myService"  portTypeName="myServiceSoap"  
   locationUri="/ws/MyServiceService" targetNamespace="http://www.myapp.it/myapp/schema/myappws">
              <sws:xsd location="/WEB-INF/wsdl/MyService.xsd"/>
</sws:dynamic-wsdl>

The service portion of the generated WSDL was like this:

<wsdl:service name="MySoapService">
    <wsdl:port binding="tns:MySoapSoap11" name="MySoapSoap11">        
         <soap:address location="http://www.myapp.com/myapp/ws/MyService"/>
    </wsdl:port>
</wsdl:service>

The scheme of the address location was http instead of https. To solve this, we have to force the scheme to https by creating a new bean in the application context:

<bean id="myServiceWsdlDefinitionHandlerAdapter" name="wsdlDefinitionHandlerAdapter" class="it.github.parmag.MyServiceWsdlDefinitionHandlerAdapter"/>

The name must be “wsdlDefinitionHandlerAdapter”. Then the bean could be like this:

public class MyServiceWsdlDefinitionHandlerAdapter extends WsdlDefinitionHandlerAdapter {
	public MyServiceWsdlDefinitionHandlerAdapter() {
	}

	@Override
	protected String transformLocation(String location, HttpServletRequest request) {
		String newLocation = super.transformLocation(location, request);
		// replace the original scheme (http) with https, keeping the rest of the location
		newLocation = "https" + newLocation.substring(newLocation.indexOf(":"));
		return newLocation;
	}
}

Now, the address location is https:

<wsdl:service name="MySoapService">
    <wsdl:port binding="tns:MySoapSoap11" name="MySoapSoap11">        
         <soap:address location="https://www.myapp.com/myapp/ws/MyService"/>
    </wsdl:port>
</wsdl:service>

How to deal with fixed fields text records and JPA Entities

If the persistence layer is realized with JPA, we can map record fields directly to Entity fields

Fixefid is a Java library which lets you define a fixed fields text record with a Java Bean or a Java Enum. Often a text record must be built from data persisted on a database, or a text record must be persisted on a database.

The solution is to define a mapping from the record’s fields to the persistence model. If the persistence layer is realized with JPA Entities, the mapping can be done directly on the JPA Entity with the Fixefid annotations. In fact a JPA Entity is a POJO, that is, a Java Bean. So we can annotate the JPA Entity with the Fixefid annotations to realize the mapping, without the need to create two models, one for the record and another one for the JPA Entity.

The environment is as follows:

  • Java 8
  • Spring Boot 2.3.4.RELEASE
  • Spring Data JPA
  • Fixefid 1.1.0
  • H2 Database

For example we can have a Customer bean like this one:

@Entity
@FixefidRecord
public class Customer {
	@Id
	@GeneratedValue(strategy = GenerationType.AUTO)
	@FixefidField(fieldLen = 19, fieldOrdinal = 0, fieldType = FieldType.N)
	private Long id;
	
	@FixefidField(fieldLen = 50, fieldOrdinal = 1, fieldType = FieldType.AN)
	private String firstName;
	
	@FixefidField(fieldLen = 50, fieldOrdinal = 2, fieldType = FieldType.AN)
	private String lastName;

	protected Customer() {
	}

	public Customer(String firstName, String lastName) {
		this.firstName = firstName;
		this.lastName = lastName;
	}

	@Override
	public String toString() {
		return String.format("Customer[id=%d, firstName='%s', lastName='%s']", id, firstName, lastName);
	}

	public Long getId() {
		return id;
	}

	public String getFirstName() {
		return firstName;
	}

	public String getLastName() {
		return lastName;
	}
}

The Customer above is annotated with Entity and FixefidRecord. The fields are annotated with FixefidField and other JPA annotations. To obtain the record from the database:

Customer customer = repository.findById(1L).orElseThrow(() -> new IllegalArgumentException("Customer not found"));
String record = new BeanRecord(customer).toString();

To save the record to the database:

String newRecord = "0000000000000000001Paul                                              Robinson                                          ";
Customer newCustomer = new Customer();
new BeanRecord(newCustomer, newRecord);
repository.save(newCustomer);
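
Both snippets assume a standard Spring Data repository, for example:

public interface CustomerRepository extends CrudRepository<Customer, Long> {
}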

I made a video tutorial where the example above is explained in detail.

Here is the project of the example on GitHub.

Disable Spring Cloud Stream support for testing

TestSupportBinder is a minimal binder that does nothing and is not useful for integration tests between services

A short time ago we started the development of a project based on a microservices architecture. The intent is to create services based on REST APIs, which receive messages and write them to Kafka topics. Other services read the messages from the Kafka topics and write them to the database.

The environment is as follows:

  • Java 11
  • Spring Core 5.1
  • Spring Boot 2.1
  • Spring Cloud Stream 2.1
  • Spring Web 5.1
  • Apache Kafka 2.0.1

The IDE is the new STS 4. By default, if you use Spring Initializr to create a Spring Cloud Stream based project, the Spring Cloud Stream test support dependency is added:

<dependency>
     <groupId>org.springframework.cloud</groupId>
     <artifactId>spring-cloud-stream-test-support</artifactId>
     <scope>test</scope>
</dependency>

The dependency ensures that the TestSupportBinder class can be used for the test phase. TestSupportBinder is a minimal binder that does nothing about binding consumers.
I find that class not very useful, even for the test phase itself. It is surely not useful for integration tests between the various services.
In fact, in our case, when launching the services from STS, we could write to the topic but the listeners received no messages. When launching the services via Maven instead, everything worked fine. This is because the test scope of the dependency ensures that, when launching the command

mvn spring-boot:run

the TestSupportBinder class is not loaded by the Spring Boot autoconfiguration. However, if we run the application from STS (in RUN or DEBUG mode), TestSupportBinder is loaded instead of the desired Kafka binder. To disable the test binder when running from STS, you need to add the annotation

@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)

as in this example:

@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
@EnableBinding(MsgStreams.class)
public class StreamsConfig {

}

or, even better, add the following line in the application.properties file:

spring.autoconfigure.exclude=org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration

In this way, the integration tests between the various services will go well!

Partial commit and job restart with Spring Batch

If a batch fails after a partial commit, it must be possible to start processing the file again, skipping the lines already committed

A classic batch is the processing of a file: records are read, and for each record the data is processed and persisted to the database (reader, processor and writer).

In case the file is large and contains thousands of records, partial commits must be planned during processing. For example, every 1000 records we can decide to commit the processing to the database.

With Spring Batch it is very easy to get partial commits: it’s a simple parameter that is passed to the StepBuilder, as the sketch below shows. If it’s necessary to implement more complex partial commit policies, it is possible to implement custom completion policies (which we will not see here because they are not the subject of this article).
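For example, a minimal sketch of a step that commits every 1000 records (the reader, processor and writer beans are assumed to be defined elsewhere):

@Bean
public Step processFileStep() {
	return stepBuilderFactory.get("processFileStep")
			// the chunk size is the commit interval: one commit every 1000 records
			.<String, String>chunk(1000)
			.reader(reader())
			.processor(processor())
			.writer(writer())
			.build();
}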

Finally, if a batch fails after a partial commit, it must be possible to start processing the file again, skipping the lines already committed. This too is supported by Spring Batch, but we must add a few lines of code; it is not a simple configuration parameter. Also in this case it is possible to implement custom skip and retry policies (which, again, we will not see because they are not the subject of this article).

The environment is as follows:

  • Java 7
  • Spring Boot 1.1.8
  • Spring Batch 3.0

There are two concepts: the job instance and the job execution. An instance of a job is accomplished through n executions (typically one, or more if there have been failures). Furthermore, only a failed job can be restarted.

To implement the restart we need the jobRegistry, jobOperator, jobExplorer and jobLauncher. Here is the complete code of the batch configuration.

Basically these are the steps (a code sketch follows the list):

  • register the job in the jobRegistry
  • get job instances through the jobOperator
  • given the last instance, get executions through the jobOperator
  • through the jobExplorer check if the last execution has failed
  • in case the last execution has failed, the job must be restarted via the jobOperator
  • in case the last execution was successful, launch a new job instance via the jobLauncher

Spring Batch takes care of managing the restart starting from the first uncommitted record.
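
A minimal sketch of the steps above (the bean wiring is assumed, and exception handling is simplified):

public void startOrRestart(Job job, JobParameters params) throws Exception {
	// register the job, so that the jobOperator can find it by name
	jobRegistry.register(new ReferenceJobFactory(job));

	// get the last instance of the job, if any
	List<Long> instanceIds = jobOperator.getJobInstances(job.getName(), 0, 1);
	if (!instanceIds.isEmpty()) {
		// get the executions of the last instance (most recent first)
		List<Long> executionIds = jobOperator.getExecutions(instanceIds.get(0));
		// check through the jobExplorer if the last execution has failed
		JobExecution lastExecution = jobExplorer.getJobExecution(executionIds.get(0));
		if (lastExecution.getStatus() == BatchStatus.FAILED) {
			// restart: Spring Batch skips the chunks already committed
			jobOperator.restart(executionIds.get(0));
			return;
		}
	}
	// last execution successful (or first run): launch a new job instance
	jobLauncher.run(job, params);
}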

Thanks Spring Batch 🙂


Data Masking with JPA and Spring Security

The protection of sensitive data is an increasingly popular topic in IT applications

The protection of sensitive data is an increasingly popular topic in IT applications. In our case too, a customer asked us to implement, on an already existing web application, a data masking solution that is dynamic and based on security profiles.

The application is developed in Java, with Spring MVC for the management of the Model View Controller, JPA for data access and Spring Security for the management of security profiles.

There are two approaches in the literature: SDM (Static Data Masking) and DDM (Dynamic Data Masking).

SDM

SDM clones the current database, masking sensitive data in the clone. Specific inquiry applications that provide data masking read from the cloned database.

Advantages:

  • good data access performance at runtime

Disadvantages:

  • the data read can be stale (the clone is updated via batch and, depending on the mode, the update can take from minutes to hours)
  • not ideal for a role-based / field-based security scenario

DDM

DDM masks the data when it is read at runtime.

Advantages:

  • reads return real, up-to-date data
  • ideal for a role-based / field-based security scenario

Disadvantages:

  • read/write performance overhead
  • unmask algorithms may be needed to avoid data corruption (to prevent the masked data from being persisted to the DB)

Given the customer’s requests, the DDM technique is the one that best suits a dynamic scenario based on security profiles.

At this point another choice had to be made, because for DDM there are two approaches:

JPA Rewriting

In the literature this is called SQL Rewriting; in our specific case it is JPA rewriting, JPA being our data access layer. The data is masked in a @PostLoad or @PostUpdate annotated method of a JPA Entity Listener, that is, in the persistence layer.
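
A sketch of the listener (MaskingService is a hypothetical helper that checks the security profiles):

public class PersonMaskingListener {

	@PostLoad
	public void maskAfterLoad(Person person) {
		// mask the sensitive field in the persistence layer, right after loading
		if (MaskingService.mustMask("Person.taxCode")) {
			person.setTaxCode("***");
		}
	}
}

The listener is attached to the entity with @EntityListeners(PersonMaskingListener.class).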

Advantages:

  • punctual masking of the data in the load phase from the DB
  • easy data-masking mapping

Disadvantages:

  • masking depends on the data type (for example a string can be masked with ‘***’ or ‘###’, a number with ‘000’ or ‘999’, a date with ‘99/99/9999’, etc.)
  • difficulty in the look & feel when rendering the view if the data is masked (each view should declare the masking… which falls back into the View Rewriting case below)
  • unmask algorithms that use the user session to store the unmasked data. JPA shares the objects loaded from the DB, so an object loaded by an inquiry function may later be used by an update function. In that case the masked data would be persisted to the DB, which means data corruption
  • it is complex to make the masking depend on the function (use of the user session for the function-masking mapping)
  • complex use of the user session (see above for unmask and function-masking mapping)

View Rewriting

The data is masked in the presentation layer, typically in JSP pages.

Advantages:

  • homogeneous masking (it does not depend on the type of data; everything can be masked, for example, with ‘***’)
  • no unmask phase is required
  • easy rendering for the look & feel (each view declares whether or not it wants masking)
  • easy to make it function-dependent (each function declares whether or not it wants masking)

Disadvantages:

  • masking is not punctual (all the views must mask… the tags reused by the views simplify this, but not completely)
  • difficult data-masking mapping (each view must declare the data)

We chose to adopt View Rewriting because, analyzing the effort (which this article omits because it is not relevant), it was more or less similar between the two approaches, while the risks of data corruption and of user-session out-of-memory errors are absent. Moreover, the View Rewriting solution is much more customizable for what concerns the look & feel.

To implement the solution we need the following things in detail:

  • a generic editor to enable or disable masking for a field
  • a masking class that performs data masking based on security profiles
  • to modify all existing views to use the masking class above

Let’s see each one in detail.

Role-based security mapping

We use a role-based security mapping based on Spring Security (already present in the application). For each piece of data that you want to mask, a role is created like this:

ROLE_MASK_DOMAIN-NAME_FIELD-NAME

for example, if I want to mask the tax code field of the people table, since the field is mapped via JPA to Person.taxCode, the role will be

ROLE_MASK_PERSON_TAXCODE

The mapping is managed dynamically with a dedicated GUI function. We used the existing Domain Editor function, a generic domain editor that allows the modification of all the fields mapped to the database for all domain classes.
We added a new editing form for managing the data-masking mapping.
The form contains all the fields of the chosen domain class. For each field you can choose (with a checkbox) whether or not to enable masking. When saving, the function performs the following steps:

  • look in the Authorities table to see if the role ROLE_MASK_DOMAIN-NAME_FIELD-NAME exists; if it does not exist, it is created (and the opposite if the field must be disabled)

For the mapping with profiles (Spring Security groups), the Spring Security functions already implemented in the appropriate views of the application are used.

Masking class

We created a class that receives as input the data to be masked and its name (for example, Person.taxCode).
The class checks (with the methods that Spring Security provides) whether the current user’s profile is associated with the role corresponding to the field (ROLE_MASK_PERSON_TAXCODE / Person.taxCode). If it is, the class masks the data and returns it to the view.
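
A sketch of such a class (the class name and the mask constant are made up):

public class DataMasker {
	private static final String MASK = "***";

	// fieldName is e.g. "Person.taxCode", mapped to the role ROLE_MASK_PERSON_TAXCODE
	public String mask(String value, String fieldName) {
		String role = "ROLE_MASK_" + fieldName.replace('.', '_').toUpperCase();
		Authentication auth = SecurityContextHolder.getContext().getAuthentication();
		if (auth != null) {
			for (GrantedAuthority authority : auth.getAuthorities()) {
				if (role.equals(authority.getAuthority())) {
					return MASK;
				}
			}
		}
		return value;
	}
}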

Change Views

The functions that need to mask data are typically the inquiry ones. In our case it helps that we adopted tags in the presentation layer, so all the show and list pages use a display.tagx tag and a table.tagx tag. We need to change these two tags to make them use the masking class.
The longest work is modifying all the JSPs that use the two tags, which must declare the name of the field they are displaying.

Finally, we also modified the search filters so that if a filter provides a search on a field that must be masked, the filter is disabled.
For example, if the filter requires a search by tax code, the filter must use the masking class to know at runtime whether the profile expects to mask this data.
If so, the filter is disabled.

Conclusions

View Rewriting with role-based security is the best solution for the following reasons:

  • effort slightly greater than the JPA Rewriting solution, but more or less similar
  • use of Spring Security to map the data to be masked to the profile
  • greater customization in terms of look & feel
  • absence of data corruption risk
  • absence of user-session out-of-memory risk

Spring Boot, Spring Batch and exit codes

When creating batches to be invoked by a scheduler, it is very important to correctly manage the JVM exit codes.

When creating batches to be invoked by a scheduler, it is very important to correctly manage the JVM exit codes.
By convention, the JVM ends with an exit code equal to zero if there were no problems, otherwise with an exit code greater than zero.
In this way, if the batch does not terminate correctly, the scheduler can interpret the exit code and, for example, inform the application manager via email, adopt strategies to relaunch or recover the batch, or terminate a job box.

If you use Spring Boot to start a Spring Batch based batch, the JVM always ends with an exit code of zero, even in the case of runtime exceptions. In order to correctly manage the JVM exit codes, it is necessary to intervene with an ExitCodeGenerator.

The application stack is composed of:

Spring Core 4.0.7
Spring Boot 1.1.8
Spring Batch 3.0.1

In the class that configures the batch, we need to add the following methods:

@Bean
public JobExecutionExitCodeGenerator jobExecutionExitCodeGenerator() {
    return new JobExecutionExitCodeGenerator();
}

protected JobExecution addToJobExecutionExitCodeGenerator(JobExecution jobExecution) {
    JobExecutionExitCodeGenerator jobExecutionExitCodeGenerator = jobExecutionExitCodeGenerator();
    jobExecutionExitCodeGenerator.onApplicationEvent(new JobExecutionEvent(jobExecution));
    return jobExecution;
}

As ExitCodeGenerator we can use the default Spring Boot implementation, which is JobExecutionExitCodeGenerator. In the addToJobExecutionExitCodeGenerator method we pass the jobExecution to the exit code generator, forcing the creation of the JobExecutionEvent. When we launch the job, we must call the addToJobExecutionExitCodeGenerator method:

addToJobExecutionExitCodeGenerator(jobLauncher.run(job(), jobParameters(jobParametersMap)));

In this way, when we end the batch in the Application class, the exit code will be the one actually returned by the batch:

int exitCode = SpringApplication.exit(SpringApplication.run(batchConfiguration, args));
System.exit(exitCode);

How to migrate LDAP users to Spring Security JDBC

Migrating LDAP users to Spring Security JDBC authentication while keeping SSHA-encoded passwords

Lately we have been redoing a web portal whose users, who have access to the private area of the application, are registered through LDAP.
The only request made by the contractor is that the transition to the new portal must be transparent for registered users. This means that users must not have to change their password the first time they log in to the new portal. Passwords are stored in the LDAP repository with SSHA (Salted SHA) encoding.
Our application uses Spring Security to manage security and access to the reserved area. Spring Security supports various types of authentication, including LDAP itself. As a first idea we thought of using the same LDAP repository already present, but after analyzing this solution in detail we decided against it for various reasons.
The first is having to map the roles related to the permissions of the old application to the roles of our application (feasible, but not very nice from the functional point of view). Furthermore, having to maintain two separate servers, an LDAP server and a DBMS, when it is possible to have only one DBMS server, is not a good thing in terms of management costs.
So we decided to use the classic Spring Security JDBC authentication. The users will be migrated through a batch that loads the LDAP user dump (exported, for example, in CSV format) into the JDBC tables. To ensure that the password encoding remains the same, just configure Spring Security to use the LdapShaPasswordEncoder class. To do this you need to define the following bean in WebMvcConfiguration:

@Bean
public LdapShaPasswordEncoder passwordEncoderLDAP() {
    return new LdapShaPasswordEncoder();
}

and use it in the AuthenticationManagerBuilder defined in WebSecurityConfiguration like this:

@Autowired
private LdapShaPasswordEncoder ldapPasswordEncoder;

@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
    auth.userDetailsService(webJdbcUserDetailsManager).passwordEncoder(ldapPasswordEncoder);
}

In this way the password encoding will be the same as the one used by LDAP, and for users the transition to the new portal will be transparent.