Friday, November 21, 2014

High CPU and/or stuck thread diagnosis

  1. You can configure the max stuck thread timeout in the WebLogic admin console
  2. When you get an alert or observe high CPU, run top (use top -H for a per-thread view) or prstat
  3. Step 2 identifies the high-CPU Java process and, with the per-thread view, the busiest thread(s)
  4. Convert the offending thread ID (the LWP/TID, not the process PID) to hex; it matches the nid field in the thread dump
  5. Take a thread dump (using kill -3, the admin console, or jstack) and find the thread with that nid
  6. Look at its stack trace to see where it is stuck or what it is processing
http://middlewaremagic.com/weblogic/?tag=stuck-thread
http://www.munzandmore.com/2012/ora/weblogic-stuck-threads-howto
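If you would rather confirm the culprit from inside the JVM than convert thread ids to hex by hand, the standard ThreadMXBean API can report per-thread CPU time. A minimal sketch (class name and output format are my own, not from the notes above):

 import java.lang.management.ManagementFactory;
 import java.lang.management.ThreadInfo;
 import java.lang.management.ThreadMXBean;

 // Prints per-thread CPU time so the busiest threads can be spotted.
 public class ThreadCpuReport {
     public static void main(String[] args) {
         ThreadMXBean mx = ManagementFactory.getThreadMXBean();
         if (!mx.isThreadCpuTimeSupported()) {
             System.out.println("Thread CPU time not supported on this JVM");
             return;
         }
         for (long id : mx.getAllThreadIds()) {
             ThreadInfo info = mx.getThreadInfo(id);
             long cpuNanos = mx.getThreadCpuTime(id);
             if (info != null && cpuNanos > 0) {
                 System.out.printf("%-40s id=%d cpu=%d ms state=%s%n",
                         info.getThreadName(), id, cpuNanos / 1000000, info.getThreadState());
             }
         }
     }
 }

For a remote WebLogic server the same MXBean can be read over JMX (e.g. from JConsole) instead of running this inside the server JVM.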

xterm setup for GUI access on Linux

1. Log on to the host with your id using MobaXterm (or PuTTY)
2. echo $DISPLAY
3. xauth list
you should get something like:      xyz1064/unix:10  MIT-MAGIC-COOKIE-1  3a6eab1ace1c1cd28ea0bc1569e0d112
4. sudo su - oracle
5. $ export DISPLAY=<the same value you received in step 2>
6. $ xauth add xyz1064/unix:10  MIT-MAGIC-COOKIE-1  3a6eab1ace1c1cd28ea0bc1569e0d112  (the same string you received in step 3)

7. xclock    - to test it

Tuesday, October 21, 2014

Weblogic SSL Setup


Identity keystore - For others to access WebLogic using https
This is used to store the server's identity (private key/digital certificate pairs). When a client contacts the server, the digital certificate from this keystore is presented. You may also need to store the root and intermediate certificates in the trust store.

Trust keystore - For WebLogic to access others (e.g. when it consumes web services) using https
This contains the certificates of trusted partners/clients. When the server connects to a partner, it validates the partner's certificate against this keystore.

http://weblogicserveradministration.blogspot.com/2013/03/weblogic-server-ssl-configuration.html


WebLogic SSL self-signed certificate setup

Server49 - admin server and managedserver1
Server50 - managedserver2

on Server49 (repeat on Server50 )

  1. Generate the keystore and the key pair

 keytool -genkey -alias Server49 -keyalg RSA -keysize 1024 -validity 3650 -keypass cat360pa -keystore /appserver/Weblogic/admin/certs/Server49.jks -storepass cat360pa

  2. Export the server certificate from the keystore

 keytool -export -alias Server49 -file /appserver/Weblogic/admin/certs/Server49.cer -keystore /appserver/Weblogic/admin/certs/Server49.jks -storepass cat360pa

  3. Import the certificate into the trust store

keytool -import -alias Server49 -file /appserver/Weblogic/admin/certs/Server49.cer -keystore  /appserver/Weblogic/admin/certs/Server49_trust.jks -storepass cat360pa

check:
keytool -list -v -keystore /appserver/Weblogic/admin/certs/Server49.jks -storepass cat360pa
keytool -list -v -keystore /appserver/Weblogic/admin/certs/Server49_trust.jks -storepass cat360pa
keytool -printcert -file /appserver/Weblogic/admin/certs/Server49.cer


***********

import managed server2's certs into admin server's trust store (no need to import admin server's certificate since it is already there - same host)

keytool -import -alias Server50 -file /tmp/Server50.cer -keystore /appserver/Weblogic/admin/certs/Server49_trust.jks -storepass cat360pa

***
do admin console changes
- in adminserver--> keystores tab, change keystores to custom identity and custom trust
- specify the path to identity and trust key stores
- in adminserver --> ssl tab, under identity, set "private key alias" to local server host name (Server49)

****

Configure Nodemanager for SSL communication between adminserver and nodemanager

Add these to nodemanager.properties file

KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=/appserver/Weblogic/admin/certs/r1cvap1050.jks
CustomIdentityKeyStorePassPhrase=cat360pa
CustomIdentityAlias=r1cvap1050
CustomIdentityPrivateKeyPassPhrase=cat360pa

CustomIdentityKeyStoreType=jks

Wednesday, October 15, 2014

VIP (Virtual IP) and Virtual Host

Virtual IP
A Virtual IP (VIP) maps one external IP address and port to one of several possible internal IP addresses and ports. It can also translate an external port to a different internal port. VIP addresses map traffic received at one IP address to another address based on the destination port number in the TCP or UDP segment header. If you have only one public IP address available and you want to host multiple servers, use a VIP. A Mapped IP (MIP) should be used when you have multiple public IP addresses and you want to map a single server to a single public IP. A VIP is the equivalent of what many network engineers call port forwarding. For example:

An HTTP packet destined for 210.1.1.3:80 (that is, IP address 210.1.1.3 and port 80) might be mapped to a Web server at 192.168.1.10.
An FTP packet destined for 210.1.1.3:21 might be mapped to an FTP server at 192.168.1.20.
An SMTP packet destined for 210.1.1.3:25 might be mapped to a mail server at 192.168.1.30.

Virtual Host
Creating virtual host configurations on your Apache server does not magically cause DNS entries to be created for those host names. You must have the names in DNS, resolving to your IP address, or nobody else will be able to see your web site. You can put entries in your hosts file for local testing, but that will work only from the machine with those hosts entries.
Server configuration

 # Ensure that Apache listens on port 80  

 Listen 80 
 # Listen for virtual host requests on all IP addresses 
 NameVirtualHost *:80 
 <VirtualHost *:80> 
 DocumentRoot /www/example1 
 ServerName www.example.com 
 # Other directives here 
 </VirtualHost> 
 <VirtualHost *:80> 
 DocumentRoot /www/example2 
 ServerName www.example.org 
 # Other directives here 
 </VirtualHost> 

Monday, October 06, 2014

JSF 2 introduction

faces-config.xml contains
  • definitions of managed beans
  • navigation rules (mapping return values to result pages)
  • registered validators
  • declared locales
  • injected bean properties

Sample

 <faces-config … version="2.2">
 <managed-bean>
 <managed-bean-name>messageHandler</managed-bean-name>
 <managed-bean-class>
 coreservlets.SimpleController2
 </managed-bean-class>
 <managed-bean-scope>request</managed-bean-scope>
 </managed-bean>
 <navigation-rule>
 <from-view-id>/starting-page.xhtml</from-view-id>
 <navigation-case>
 <from-outcome>return-value-1</from-outcome>
 <to-view-id>/result-page-1.xhtml</to-view-id>
 </navigation-case>
 <navigation-case>
 <from-outcome>return-value-2</from-outcome>
 <to-view-id>/result-page-2.xhtml</to-view-id>
 </navigation-case>
 </navigation-rule>
 </faces-config>

View

 <!DOCTYPE … >
 <html xmlns="http://www.w3.org/1999/xhtml"
 xmlns:h="http://xmlns.jcp.org/jsf/html">
 <h:head><title>JSF 2: Basic Navigation Rules</title>
 …
 </h:head>
 <h:body>
 …
 <h:form>
 Your message:
 <h:inputText value="#{simpleController.message}"/>
 <br/>
 <h:commandButton value="Show Results"
 action="#{simpleController.doNavigation}"/>
 </h:form>
 …
 </h:body></html>
 @ManagedBean
 public class SimpleController {
     private String message = "";
     // getMessage and setMessage

     public String doNavigation() {
         if (message.trim().length() < 2) {
             return ("too-short");
         } else {
             String[] results = { "page1", "page2", "page3" };
             return (RandomUtils.randomElement(results));
         }
     }
 }

Managed beans have following scope
  • request
  • application
  • session
  • flow
  • none
Other JSF info
  • web.xml declares the *.jsf URL extension (mapped to the Faces servlet)
  • You will enter URL as xyz.jsf but actual file name would be xyz.xhtml
  • view technology is facelets
  • JSF has integrated AJAX support (f:ajax tag) and can be thought of as an alternative to jQuery and Dojo etc
  • Event handling
  • built in capabilities for validation
  • Page templating
JSF life cycle
  1. Restore view 
  2. Apply request values; process events 
  3. Process validations; process events 
  4. Update model values; process events
  5. Invoke application; process events
  6. Render response

Friday, October 03, 2014

Typical JEE development process, stack

Agile development using the Scrum methodology
In the Scrum method of agile software development, work is confined to a regular, repeatable work cycle, known as a sprint or iteration. In by-the-book Scrum, a sprint is 30 days long, but many teams prefer shorter sprints, such as one-week, two-week, or three-week sprints. How long each sprint lasts is for the team to decide, weighing the advantages and disadvantages of a longer or shorter sprint for their specific development environment. The important thing is that a sprint has a consistent duration.

During each sprint, a team creates a shippable product, no matter how basic that product is. Working within the boundaries of such an accelerated timeframe, the team would only be able to build the most essential functionality. However, placing an emphasis on working code motivates the Product Owner to prioritize a release’s most essential features, encourages developers to focus on short-term goals, and gives customers a tangible, empirically based view of progress. Because a release requires many sprints for satisfactory completion, each iteration of work builds on the previous. This is why Scrum is described as “iterative” and “incremental.”

Every sprint begins with the sprint planning meeting, in which the Product Owner and the team discuss which stories will be moved from the product backlog into the sprint backlog. It is the responsibility of the Product Owner to determine what work the team will do, while the team retains the autonomy to decide how the work gets done. Once the team commits to the work, the Product Owner cannot add more work, alter course mid-sprint, or micromanage.

During the sprint, teams check in at the daily Scrum meeting, also called the daily standup. This time-boxed meeting gives teams a chance to update project status, discuss solutions to challenges, and broadcast progress to the Product Owner (who may only observe or answer the team’s questions).

Just as every sprint begins with the sprint planning meeting, the sprint concludes with the sprint review meeting, in which the team presents its work to the Product Owner. During this meeting, the Product Owner determines if the team’s work has met its acceptance criteria. If a single criterion is not met, the work is rejected as incomplete. If it satisfies the established criteria, then the team is awarded the full number of points.

Because certain sprints are hugely successful and others less than ideal, a team also gathers at the end of each sprint to share what worked, what didn’t, and how processes could be improved. This meeting is called the sprint retrospective meeting.


  1. Architecture and design scenario
    1. architecture concepts
    2. architecture definition
    3. proof of concepts
  2. Setting up development environment
  3. data modelling
  4. analysis and design
    1. class diagrams - inheritance, association, aggregation, composition
    2. usecase diagrams
    3. deployment diagrams
  5. development - reusable components, continuous integration, build automation
  6. WebServices
    1. SOAP web services - stacks, security, 
    2. Restful - stacks, security
    3. Code-first vs. contract-first (WSDL) approaches
  7. xml binding
  8. Stateless session beans
  9. design patterns
  10. ORM - JPA - Hibernate
  11. Continuous integration (CI)
    1. Developers check out code into their private workspaces.
    2. When done, they commit their changes to the repository.
    3. The CI server monitors the repository and checks out changes when they occur.
    4. The CI server builds the system and runs unit and integration tests.
    5. The CI server releases deployable artefacts for testing.
    6. The CI server assigns a build label to the version of the code it just built.
    7. The CI server informs the team of the successful build.
    8. If the build or tests fail, the CI server alerts the team.
    9. The team fixes the issue at the earliest opportunity.
    10. The team continues to integrate and test throughout the project.
    11. Tools used for CI
      1. Deployed jenkins.war in Tomcat
      2. Configure ANT and Maven in Jenkins
      3. Setup Jenkins jobs to build projects
      4. http://www.vogella.com/tutorials/Jenkins/article.html
  12. build automation

Obtain a thread dump and heap dump for Java process

A thread dump shows the state of every thread and the locks each thread holds or waits on.
Ways to obtain a thread dump
1. get the pid using ps or jps
    jstack <pid> > xyz.log --> thread dump
    jstack -l prints additional information about locks
2. kill -QUIT <pid> --> thread dump (written to the JVM's stdout)
    kill -3 <pid> is equivalent; capture the server's stdout (e.g. redirect with 2>&1)
3. login to jconsole and request a thread dump
4. jrcmd <pid> print_threads (JRockit)
5. Login to AdminConsole --> Server --> Monitoring --> Threads - See more at: http://middlewaremagic.com/weblogic/?p=823#sthash.5X2b9Ntr.dpuf

Heap Dump
1. jmap -dump:format=b,file=xyz.hprof <PID>
   Use jmap to dump the heap and analyze it with VisualVM
2. JConsole to get a heap dump (via the com.sun.management.HotSpotDiagnostic MBean)
3. use $JAVA_HOME/bin/jvisualvm to take a heap dump and analyze it. For remote systems, use a VNC viewer or X server to redirect the console windows to the local env
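A heap dump can also be triggered from code through the HotSpot diagnostic MBean (the same MBean JConsole invokes). A small hedged sketch, assuming a HotSpot JVM; the output path is my own choice:

 import java.lang.management.ManagementFactory;
 import com.sun.management.HotSpotDiagnosticMXBean;

 // Writes a binary heap dump (readable by VisualVM / MAT) for the current JVM.
 public class HeapDumper {
     public static void main(String[] args) throws Exception {
         HotSpotDiagnosticMXBean mbean =
                 ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
         // true = dump only live objects (forces a GC first)
         mbean.dumpHeap("/tmp/app-heap.hprof", true);
         System.out.println("Heap dump written to /tmp/app-heap.hprof");
     }
 }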

Thursday, October 02, 2014

Transaction management - Spring Framework

Local vs. Global Transactions
Local transactions are specific to a single transactional resource like a JDBC connection, whereas global transactions can span multiple transactional resources like transaction in a distributed system.

Local transaction management can be useful in a centralized computing environment where application components and resources are located at a single site, and transaction management only involves a local data manager running on a single machine. Local transactions are easier to implement.

Global transaction management is required in a distributed computing environment where all the resources are distributed across multiple systems. In such a case transaction management needs to be done both at local and global levels. A distributed or a global transaction is executed across multiple systems, and its execution requires coordination between the global transaction management system and all the local data managers of all the involved systems.

Programmatic transaction management: This means that you manage the transaction yourself in code. That gives you extreme flexibility, but it is harder to maintain.

Declarative transaction management: This means you separate transaction management from the business code. You only use annotations or XML based configuration to manage the transactions.

A transaction attribute may have one of the following values:
Required
RequiresNew
Mandatory
NotSupported
Supports
Never

Transaction Isolation level
DEFAULT This is the default isolation level.
READ_COMMITTED Indicates that dirty reads are prevented; non-repeatable reads and phantom reads can occur.
READ_UNCOMMITTED Indicates that dirty reads, non-repeatable reads and phantom reads can occur.
REPEATABLE_READ Indicates that dirty reads and non-repeatable reads are prevented; phantom reads can occur.
SERIALIZABLE Indicates that dirty reads, non-repeatable reads and phantom reads are prevented.
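Both the transaction attribute and the isolation level above map directly onto Spring's @Transactional annotation. A minimal sketch (the service and method names are invented for illustration):

 import org.springframework.transaction.annotation.Isolation;
 import org.springframework.transaction.annotation.Propagation;
 import org.springframework.transaction.annotation.Transactional;

 public class OrderService {

     // REQUIRES_NEW suspends any caller transaction and starts a fresh one;
     // READ_COMMITTED prevents dirty reads but allows non-repeatable/phantom reads.
     @Transactional(propagation = Propagation.REQUIRES_NEW,
                    isolation = Isolation.READ_COMMITTED,
                    timeout = 30)
     public void placeOrder(String orderId) {
         // DAO calls here run inside the new transaction and roll back
         // together if an unchecked exception escapes.
     }

     // SUPPORTS joins an existing transaction if present, otherwise runs non-transactionally.
     @Transactional(propagation = Propagation.SUPPORTS, readOnly = true)
     public String findOrder(String orderId) {
         return null; // lookup omitted
     }
 }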

Spring.xml for hibernate
<beans xmlns="http://www.springframework.org/schema/beans"

    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.0.xsd
        http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.0.xsd">

    <!-- Enable Annotation based Declarative Transaction Management -->
    <tx:annotation-driven proxy-target-class="true"
        transaction-manager="transactionManager" />

    <!-- Creating TransactionManager Bean, since JDBC we are creating of type
        DataSourceTransactionManager -->
    <bean id="transactionManager"
        class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="dataSource" />
    </bean>
   
    <!-- MySQL DB DataSource -->
    <bean id="dataSource"
        class="org.springframework.jdbc.datasource.DriverManagerDataSource">

        <property name="driverClassName" value="com.mysql.jdbc.Driver" />
        <property name="url" value="jdbc:mysql://localhost:3306/TestDB" />
        <property name="username" value="pankaj" />
        <property name="password" value="pankaj123" />
    </bean>

    <bean id="customerDAO" class="com.journaldev.spring.jdbc.dao.CustomerDAOImpl">
        <property name="dataSource" ref="dataSource"></property>
    </bean>

    <bean id="customerManager" class="com.journaldev.spring.jdbc.service.CustomerManagerImpl">
        <property name="customerDAO" ref="customerDAO"></property>
    </bean>

</beans>



import org.springframework.transaction.annotation.Transactional;

import com.journaldev.spring.jdbc.dao.CustomerDAO;
import com.journaldev.spring.jdbc.model.Customer;

public class CustomerManagerImpl implements CustomerManager {

    private CustomerDAO customerDAO;

    public void setCustomerDAO(CustomerDAO customerDAO) {
        this.customerDAO = customerDAO;
    }

    @Override
    @Transactional
    public void createCustomer(Customer cust) {
        customerDAO.create(cust);
    }

}



Spring transactions client driver

package com.journaldev.spring.jdbc.main;

import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.journaldev.spring.jdbc.model.Address;
import com.journaldev.spring.jdbc.model.Customer;
import com.journaldev.spring.jdbc.service.CustomerManager;
import com.journaldev.spring.jdbc.service.CustomerManagerImpl;

public class TransactionManagerMain {

    public static void main(String[] args) {
        ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext(
                "spring.xml");

        CustomerManager customerManager = ctx.getBean("customerManager",
                CustomerManagerImpl.class);

        Customer cust = createDummyCustomer();
        customerManager.createCustomer(cust);

        ctx.close();
    }

    private static Customer createDummyCustomer() {
        Customer customer = new Customer();
        customer.setId(2);
        customer.setName("Pankaj");
        Address address = new Address();
        address.setId(2);
        address.setCountry("India");
        // setting value more than 20 chars, so that SQLException occurs
        address.setAddress("Albany Dr, San Jose, CA 95129");
        customer.setAddress(address);
        return customer;
    }

}

Annotation style transaction
To use annotation-style transaction management, all you have to do is add three simple configurations to your XML file:

<context:annotation-config/>: tells the Spring framework to read the @Transactional annotation

<tx:annotation-driven/>: automatically adds transaction support, which eventually wraps your code in a transaction scope

a DataSourceTransactionManager (or another PlatformTransactionManager) bean definition


Example:
@Transactional
public class AnnotatedUserDao implements IUserDao {

Example for read only:
@Transactional(readOnly = true)
public User selectUser(int uid) {

Programmatic transactions in Spring
public void deleteUser(final int uid) {
  // platformTransactionManager and jdbcTemplate are injected collaborators
  DefaultTransactionDefinition paramTransactionDefinition = new DefaultTransactionDefinition();

  TransactionStatus status = platformTransactionManager.getTransaction(paramTransactionDefinition);
  try {
    String delQuery = "delete from users where id = ?";
    jdbcTemplate.update(delQuery, new Object[]{uid});
    platformTransactionManager.commit(status);
  } catch (Exception e) {
    platformTransactionManager.rollback(status);
  }
}
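Spring also offers TransactionTemplate, which keeps the programmatic style but handles commit/rollback for you. A rough sketch under the same assumptions (an injected JdbcTemplate and transaction manager; class name is invented):

 import org.springframework.jdbc.core.JdbcTemplate;
 import org.springframework.transaction.PlatformTransactionManager;
 import org.springframework.transaction.support.TransactionCallbackWithoutResult;
 import org.springframework.transaction.support.TransactionStatus;
 import org.springframework.transaction.support.TransactionTemplate;

 public class UserRepository {

     private final JdbcTemplate jdbcTemplate;
     private final TransactionTemplate txTemplate;

     public UserRepository(JdbcTemplate jdbcTemplate, PlatformTransactionManager txManager) {
         this.jdbcTemplate = jdbcTemplate;
         this.txTemplate = new TransactionTemplate(txManager);
     }

     public void deleteUser(final int uid) {
         // The callback body is committed on normal return and
         // rolled back automatically if a RuntimeException escapes.
         txTemplate.execute(new TransactionCallbackWithoutResult() {
             @Override
             protected void doInTransactionWithoutResult(TransactionStatus status) {
                 jdbcTemplate.update("delete from users where id = ?", uid);
             }
         });
     }
 }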



Wednesday, October 01, 2014

JRockit JVM Garbage Collection

Following are 3 GC algorithms available in JRockit
  1. throughput(Transactions per second): Optimizes for maximum throughput -- default
    1. moves most work to GC pauses
    2. Least GC overhead
    3. Application threads do as little as possible
  2. pausetime: Optimizes for short and even pause times
    1. move work out of GC pauses
    2. More overall GC overhead
    3. Application threads do more work
  3. deterministic: Optimizes for very short and deterministic pause times (requires Oracle JRockit Real Time)
JRockit mission control
  1. monitor health and performance of JVM in production
  2. Performance tuning, diagnostics
  3. Profiling - detect hot methods

JRockit flight recorder
  1. Start JVM with recorder options to dump to a .jfr file
  2. Use mission control GUI to analyze the file


Java Application Tuning

Tuning objectives

Tools used
JProfiler
JRockit Mission Control
Wily Introscope
Fluke
Splunk
Wireshark

testing - LoadRunner, WinRunner, JUnit, JMeter

Measurement
CPU (ideally should not exceed 50-75%)
Memory
IO usage
Response time
Latency
Throughput

Application level tuning

Java language tuning
Using hashmaps vs hashtable, array vs vector,
Use StringBuilder (or StringBuffer when thread safety is required) to concatenate strings
Assign null to Variables That Are No Longer Needed
Declare Constants as static final
Avoid Finalizers - Adding finalizers to code makes the garbage collector more expensive and unpredictable
Synchronize Only When Necessary
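A couple of the points above in code form - a trivial sketch, not a benchmark; names are made up:

 public class TuningExamples {

     // Declare constants as static final so they are resolved once.
     private static final int MAX_RETRIES = 3;

     // Build strings in a loop with StringBuilder instead of repeated '+',
     // which would create a new intermediate String on every iteration.
     public static String joinIds(int[] ids) {
         StringBuilder sb = new StringBuilder();
         for (int id : ids) {
             if (sb.length() > 0) {
                 sb.append(',');
             }
             sb.append(id);
         }
         return sb.toString();
     }

     // Synchronize only the critical section, not the whole method.
     private final Object lock = new Object();
     private long counter;

     public void recordEvent() {
         // ...non-shared work can happen outside the lock...
         synchronized (lock) {
             counter++;
         }
     }
 }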

EJB

JNDI

Application Server
Precompile jsps

OS
File Descriptors
set rlim_fd_max = 8192
Verify this hard limit by using the following command:
ulimit -a -H
Once the above hard limit is set, increase the value of this property explicitly (up to this limit)using the following command:
ulimit -n 8192
Verify this limit by using the following command:
ulimit -a

DISK IO settings

TCP/IP settings

Virtual memory settings

Run TOP command in Linux to check CPU usage.
Run VMSTAT, SAR, PRSTAT command to get more information on CPU, memory usage and possible blocking.
Enable the trace file before running your queries, then process the trace file with tkprof to create an output file.
Using the explain plan, check the elapsed time for each query, then tune them accordingly.

What is the use of iostat/vmstat/netstat command in Linux?
Iostat – reports on terminal, disk and tape I/O activity.
Vmstat – reports on virtual memory statistics for processes, disk, tape and CPU activity.
Netstat – reports on the contents of network data structures.

Caching

JVM tuning/Garbage collection
The best GC algorithm, especially for web server applications, is often the Concurrent Mark Sweep (CMS) GC
set the JVM operation mode to -server

-Xms, -Xmx
-XX:PermSize=256m -XX:MaxPermSize=256m --> these options were removed in newer HotSpot JVMs (Java 8 replaced PermGen with Metaspace; use -XX:MaxMetaspaceSize instead)
Log GC -Xloggc:$CATALINA_BASE/logs/gc.log -XX:+PrintGCDetails
-XX:+PrintGCDateStamps
HeapDump on Out of memory - -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<dump directory>

jvm - space ratios,
gc type – CMS (concurrent mark and sweep, parallel), G1 (this is new in HotSpot)

In the event CPU usage rate is high - If TPS is low but CPU usage rate is high, this is likely to result from inefficient implementation. In this case, you should find the bottlenecks using a profiler. You can analyze this with jvisualvm, Eclipse TPTP, or JProbe.
     
JMS tuning
quota - space for jms messages at queue or jms server level. if quota limit is reached, exception is raised
send timeout - blocking producers when quota limits are reached
paging messages
message buffer size - amount of message data kept in memory before messages are paged out
message compression
expired messages

JDBC code tuning
Use prepared statements. Use parametrized SQL.
Tune the SQL to minimize the data returned (e.g. not 'SELECT *').
Minimize transaction conflicts (e.g. locked rows).
Use connection pooling.
Try to combine queries and batch updates.
Use stored procedures.
Cache data to avoid repeated queries.
Close resources (Connections, Statements, ResultSets) when finished with.
Select the fastest JDBC driver.
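Several of these tips combined in one hedged sketch (table and column names are invented; assumes a Java 7+ driver so try-with-resources closes everything):

 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import javax.sql.DataSource;

 public class CustomerJdbc {

     private final DataSource dataSource; // obtained from a connection pool

     public CustomerJdbc(DataSource dataSource) {
         this.dataSource = dataSource;
     }

     // Parameterized SQL + selecting only the needed column + resources closed automatically.
     public String findName(int id) throws SQLException {
         String sql = "select name from customer where id = ?";
         try (Connection con = dataSource.getConnection();
              PreparedStatement ps = con.prepareStatement(sql)) {
             ps.setInt(1, id);
             try (ResultSet rs = ps.executeQuery()) {
                 return rs.next() ? rs.getString("name") : null;
             }
         }
     }

     // Batched updates reduce round trips for bulk changes.
     public void updateStatuses(int[] ids, String status) throws SQLException {
         String sql = "update customer set status = ? where id = ?";
         try (Connection con = dataSource.getConnection();
              PreparedStatement ps = con.prepareStatement(sql)) {
             for (int id : ids) {
                 ps.setString(1, status);
                 ps.setInt(2, id);
                 ps.addBatch();
             }
             ps.executeBatch();
         }
     }
 }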

JDBC parameter tuning
Increasing Performance with the Statement Cache
Connection Testing Options for a Data Source - test frequency/
Minimized Connection Test Delay After Database Connectivity Loss
Minimized Connection Request Delay After Connection Test Failures
Minimized Connection Request Delays After Loss of DBMS Connectivity
Minimizing Connection Request Delay with Seconds to Trust an Idle Pool Connection
A leaked connection is a connection that was not properly returned to the connection pool in the data source. To automatically recover leaked connections, you can specify a value for Inactive Connection Timeout on the JDBC Data Source

Threadpool/work manager
Contention/locks
Minimize I/O
Parallelization/concurrency

Hibernate tuning
set the show_sql property (hibernate.show_sql=true) to log the generated SQL

Tip 1 - Reduce primary key generation overhead
Tip 2 - Use JDBC batch inserts/updates
Tip 3 - Periodically flush and clear the Hibernate session
entityManager.flush();
entityManager.clear();

Tip 4 - Reduce Hibernate dirty-checking overhead
@Transactional(readOnly=true)
public void someBusinessMethod() {
}

Tip 5 - Search for 'bad' query plans - full table scans and full cartesian joins
Tip 6 - Use the second-level and query caches
Define lazy loading as the preferred association loading strategy,
Set ReadOnly to "true" on Queries and Criteria, when objects returned will never be modified.

Cache – 1st level = Session (per session), 2nd level = SessionFactory-wide (shared across sessions), plus the query results cache
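Tips 2 and 3 above in a single hedged sketch using the JPA EntityManager API (the entity and batch size are arbitrary choices, not from these notes):

 import java.util.List;
 import javax.persistence.Entity;
 import javax.persistence.EntityManager;
 import javax.persistence.Id;

 @Entity
 class Customer {
     @Id
     private Long id;
     private String name;
 }

 public class CustomerBatchLoader {

     private static final int BATCH_SIZE = 50; // align with hibernate.jdbc.batch_size

     // Periodically flushing and clearing keeps the persistence context small,
     // so dirty checking and memory use stay flat during large inserts.
     public void saveAll(EntityManager em, List<Customer> customers) {
         em.getTransaction().begin();
         int i = 0;
         for (Customer c : customers) {
             em.persist(c);
             if (++i % BATCH_SIZE == 0) {
                 em.flush();
                 em.clear();
             }
         }
         em.getTransaction().commit();
     }
 }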

SQL Tuning
How to see execution plan for an SQL
SQL> set autotrace on
SQL> select count(*) from table_name;
SQL> explain plan set statement_id = '111' for select * from tableName;
SQL> select * from plan_table;

How to find out cpu time for an SQL?
SQL> TIMING START select_emp
SQL> SELECT * FROM employee;
SQL> TIMING SHOW select_emp
timing for: select_emp  real: 1760
use indexes, improve h/w, caching, tune it, hints,
use explain plan or sql analyzer to see the execution path
add hints
rewrite sql
replace sql with pl/sql

How would you improve a poor performing query?

How do you tune a sql statement?

understand row level lock and how it migrates to table level lock

soa performance tuning - dehydrate, db persistence
http://docs.oracle.com/cd/E25178_01/core.1111/e10108/bpel.htm

Oracle HTTP server tuning
http://docs.oracle.com/cd/E25178_01/core.1111/e10108/http.htm#ASPER99006

Java Memory leaks

Causes of memory leaks in Java
The four typical causes of memory leaks in a Java program are:
  1. Unknown or unwanted object references: These objects are no longer needed, but the garbage collector can not reclaim the memory because another object still refers to it.
  2. Long-living (static) objects: These objects stay in the memory for the application's full lifetime. Objects tagged to the session may also have the same lifetime as the session, which is created per user and remains until the user logs out of the application. Example: Listeners, Static Collection classes, Connections, JNI
  3. Failure to clean up or free native system resources: Native system resources are resources allocated by a function external to Java, typically native code written in C or C++. Java Native Interface (JNI) APIs are used to embed native libraries/code into Java code.
  4. Bugs in the JDK or third-party libraries: Bugs in various versions of the JDK or in the Abstract Window Toolkit and Swing packages can cause memory leaks.

Symptoms
  • Causes Out of memory errors after some time
  • GC reclaims less heap each time (memory used for long-lived objects increases over time)
  • Causes frequent full GCs
  • Available free heap decreases over time
  • Response times degrade

Detection of  Memory leaks
  • Add JVM args to collect GC metrics --> -verbose:gc, -XX:+PrintGCTimeStamps, -XX:+PrintGCDetails
  • Monitor the heap. If heap usage after each full GC keeps increasing, this may indicate a memory leak.
  • Use -XX:+HeapDumpOnOutOfMemoryError to dump the heap to a file
  • Use jmap to dump the heap and analyze it with VisualVM
    • jmap -dump:format=b,file=xyz.hprof <PID>
  • Use profilers

Other technique to detect memory leak
So Java Memory Leaks occur when objects still have a GC root reference, but are not actually used anymore. Those “Loitering Objects” stay around for the whole life of the JVM. If the application is creating those “dead objects” on and on, the memory will be filled up and eventually result in a java.lang.OutOfMemoryError. Typical causes are static collections, which are used as a kind of cache. Usually objects are added, but never removed (Let’s face it: How often have you used add() and put() and how often used remove() methods?). Because the objects are referenced by the static collection, they cannot be freed up anymore, as the collection has a GC root reference via the classloader.

Examples of memory leaks
Not closing resultset and statement objects explicitly…
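Another common example is the "static collection as cache" leak described above. A hedged sketch (class and key names are invented):

 import java.util.HashMap;
 import java.util.Map;

 public class ReportCache {

     // GC root via the class: entries added here are never eligible for collection
     // unless they are explicitly removed, so the map grows for the life of the JVM.
     private static final Map<String, byte[]> CACHE = new HashMap<>();

     public static byte[] getReport(String key) {
         byte[] report = CACHE.get(key);
         if (report == null) {
             report = new byte[1024 * 1024]; // stand-in for an expensive computation
             CACHE.put(key, report);         // added, but never removed
         }
         return report;
     }
 }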

JVM Performance Tuning

http://www.petefreitag.com/articles/gctuning/
http://java.sun.com/docs/hotspot/gc1.4.2/faq.html
http://java.sun.com/developer/technicalArticles/Programming/turbo/

The garbage collector first performs a task called marking. The garbage collector traverses the application graph, starting with the root objects; those are objects that are represented by all active stack frames and all the static variables loaded into the system. Each object the garbage collector meets is marked as being used, and will not be deleted in the sweeping stage.

The sweeping stage is where the deletion of objects take place. There are many ways to delete an object: The traditional C way was to mark the space as free, and let the allocator methods use complex data structures to search the memory for the required free space. This was later improved by providing a defragmenting system which compacted memory by moving objects closer to each other, removing any fragments of free space and therefore allowing allocation to be much faster:

For the last trick to be possible a new idea was introduced in garbage collected languages: even though objects are represented by references, much like in C, they don't really reference their real memory location. Instead, they refer to a location in a dictionary which keeps track of where the object is at any moment.

Fortunately for us – but unfortunately for these garbage collection algorithms – our servers and personal computers got faster (and multiple) processors and bigger memory capacities. Compacting memory areas this large often was very taxing on the application, especially considering that when doing that, the whole application had to freeze due to the changes in the virtual memory map. Fortunately for us though, some smart people improved those algorithms in three ways: concurrency, parallelization and generational collection.


Generational garbage collection

In any application, objects could be categorized according to their life-line. Some objects are short-lived, such as most local variables, and some are long-lived such as the backbone of the application. The thought about generational garbage collection was made possible with the understanding that in an application's lifetime, most instantiated objects are short-lived, and that there are few connections between long-lived objects to short-lived objects.

In order to take advantage of this information, the memory space is divided to two sections: young generation and old generation. In Java, the long-lived objects are further divided again to permanent objects and old generation objects. Permanent objects are usually objects the Java VM itself created for caching like code, reflection information etc. Old generation objects are objects that survived a few collections in the young generation area.

Since we know that objects in the young generation memory space become garbage early, we collect that area frequently while leaving the old generation's memory space to be collected in larger intervals. The young generation memory space is much smaller, thus having shorter collection times.

An additional advantage to the knowledge that objects die quickly in this area, we can also skip the compacting step and do something else called copying. This means that instead of seeking free areas (by seeking the areas marked as unused after the marking step), we copy the live objects from one young generation area to another young generation area. The originating area is called the From area, and the target area is called the To area, and after the copying is completed the roles switch: the From becomes the To, and the To becomes the From.

In addition, the Java VM splits the young generation to three areas, by adding an area called Eden which is where all objects are allocated into. To my understanding this is done to make allocation faster by always having the allocator reference to the beginning of Eden after a collection.

By using the copying method, garbage collection achieves defragmentation without seeking for dead memory blocks. However, this method proves itself to be more efficient in areas where most objects are garbage, so it is not a good approach to take on the old generation memory area. Indeed, that area is still collected using the compacting algorithm – but now, thanks to the separation of young and old generations, it is done in much larger intervals.


OutOfMemoryError (on HeapSpace, PermGen) and JVM Performance Tuning
HeapSpace OutOfMemoryError:


"Exception in thread "main" java.lang.OutOfMemoryError: Java heap space"

Most of you would have encountered this error before. This means JVM's usage of heap space exceeded what you specified in -Xmx vm argument or the default max heap size (if you did not specify one).

When you get this error, before you increase your -Xmx value, make sure to profile your application and see whether the usage is justified or if there is any memory leak. See below section on profiling tools.

PermGen OutOfMemoryError:

"java.lang.OutOfMemoryError: PermGen space"

Sometimes, you would also get a different kind of out of memory error. In JVM heap, there are different generations - Young, Tenured and PermGen. Young and tenured generations are where the regular objects from your application are stored. Size of young + tenured generation is controlled by -Xmx variable you saw above. PermGen is where JVM stores string pool, metadata about your application's classes etc.,

This error would normally happen only if your application loads a large number of classes but does not unload them for some reason, which could be either legitimate or a memory leak. Also, check whether your application heavily uses String.intern(), which can also contribute to this error.

Thursday, September 04, 2014

Java 7/JEE 6 features

  • Swing
  • IO and New IO
    • The java.nio.file package and its related package, java.nio.file.attribute, provide comprehensive support for file I/O and for accessing the file system. A zip file system provider is also available in JDK 7. The following resources provide more information:
    • File I/O (featuring NIO 2.0) in the Java Tutorials; NIO stands for non-blocking I/O
    • Developing a Custom File System Provider
    • Zip File System Provider
  • Networking
    • The URLClassLoader.close method has been added. This method effectively eliminates the problem of how to support updated implementations of the classes and resources loaded from a particular codebase, and in particular from JAR files
  • Security
  • Concurrency Utilities
  • Rich Internet Applications (RIA)/Deployment
  • Requesting and Customizing Applet Decoration in Draggable Applets
  • Embedding JNLP File in Applet Tag
  • Deploying without Codebase
  • Handling Applet Initialization Status with Event Handlers
  • Java 2D
  • Java XML - JAXP, JAXB, and JAX-WS
  • Internationalization
  • java.lang Package -Multithreaded Custom Class Loaders in Java SE 7
  • JDBC 4.1 introduces the following features:
    • The ability to use a try-with-resources statement to automatically close resources of type Connection, ResultSet, and Statement
    • RowSet 1.1: The introduction of the RowSetFactory interface and the RowSetProvider class, which enable you to create all types of row sets supported by your JDBC driver.
  • Binary Literals - In Java SE 7, the integral types (byte, short, int, and long) can also be expressed using the binary number system. To specify a binary literal, add the prefix 0b or 0B to the number.
  • Underscores in Numeric Literals - Any number of underscore characters (_) can appear anywhere between digits in a numerical literal. This feature enables you, for example, to separate groups of digits in numeric literals, which can improve the readability of your code.
  • Strings in switch Statements - You can use the String class in the expression of a switch statement.
  • Type Inference for Generic Instance Creation - You can replace the type arguments required to invoke the constructor of a generic class with an empty set of type parameters (<>) as long as the compiler can infer the type arguments from the context. This pair of angle brackets is informally called the diamond.
  • Improved Compiler Warnings and Errors When Using Non-Reifiable Formal Parameters with Varargs Methods - The Java SE 7 compiler generates a warning at the declaration site of a varargs method or constructor with a non-reifiable varargs formal parameter. Java SE 7 introduces the compiler option -Xlint:varargs and the annotations @SafeVarargs and @SuppressWarnings({"unchecked", "varargs"}) to suppress these warnings.
  • The try-with-resources Statement - The try-with-resources statement is a try statement that declares one or more resources. A resource is an object that must be closed after the program is finished with it. The try-with-resources statement ensures that each resource is closed at the end of the statement. Any object that implements the new java.lang.AutoCloseable interface or the java.io.Closeable interface can be used as a resource. The classes java.io.InputStream, OutputStream, Reader, Writer, java.sql.Connection, Statement, and ResultSet have been retrofitted to implement the AutoCloseable interface and can all be used as resources in a try-with-resources statement.
  • Catching Multiple Exception Types and Rethrowing Exceptions with Improved Type Checking - A single catch block can handle more than one type of exception. In addition, the compiler performs more precise analysis of rethrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration.
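Several of the Java 7 language features above in one small sketch (the file name read at the end is an assumption for illustration):

 import java.io.BufferedReader;
 import java.io.FileReader;
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;

 public class Java7Features {

     public static void main(String[] args) {
         int mask = 0b1010_1010;                 // binary literal with underscores
         long budget = 1_000_000L;               // underscores in numeric literals
         List<String> names = new ArrayList<>(); // diamond: type inference for generic creation
         names.add("weblogic");

         switch (names.get(0)) {                 // Strings in switch
             case "weblogic":
                 System.out.println(mask + " " + budget);
                 break;
             default:
                 break;
         }

         // try-with-resources: the reader is closed automatically,
         // and multi-catch handles more than one exception type in a single block.
         try (BufferedReader reader = new BufferedReader(new FileReader("config.properties"))) {
             System.out.println(reader.readLine());
         } catch (IOException | RuntimeException e) {
             System.err.println("failed to read: " + e.getMessage());
         }
     }
 }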

Deprecated features in Java 7

New features in Java EE 6

  • Java API for RESTful Web Services (JAX-RS) JSR 311
  • Contexts and Dependency Injection for the Java EE Platform (CDI) JSR 299
  •  JSR 330: Dependency Injection for Java
  • Bean Validation  JSR 303
  • Enhanced Web Tier Capabilities
    •  web fragments and shared framework pluggability
    •  Servlet 3.0, JSR 315 -  asynchronous processing and support for annotations.
    • JSF 2.0, JSR 314 (facelets)
    • Support for Ajax in JSF 2.0
  •  JSR 318: Enterprise JavaBeans 3.1
    • No-interface view
    • Singletons.
    • Asynchronous session bean invocation.
    • Simplified Packaging
    • EJB Lite
    • embeddable API and container for use in the Java SE environment
  • JSR 317: Java Persistence 2.0
  • Profiles and Pruning
Spring Vs JEE 6

CDI Info

Components bound to lifecycle contexts
Webtier to enterprise tier wiring
CDI brings transactional support to web tier
CDI introduces the concept of managed beans
Annotations to define scope, qualifier, transactions, security, pooling
@Inject, @Default, @Alternative, @Named

@TransactionAttribute, @RolesAllowed, @SessionScoped, @Qualifier
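A minimal sketch of those CDI pieces wired together (bean names are invented; assumes a Java EE 6 container with CDI enabled via beans.xml):

 import java.io.Serializable;
 import javax.enterprise.context.SessionScoped;
 import javax.inject.Inject;
 import javax.inject.Named;

 // @Named exposes the bean to EL (e.g. #{cart} in a JSF page);
 // @SessionScoped binds its lifecycle to the HTTP session context.
 @Named("cart")
 @SessionScoped
 public class ShoppingCart implements Serializable {

     // The container injects a matching implementation; qualifiers or
     // @Alternative beans activated in beans.xml can steer which one is chosen.
     @Inject
     private PriceCalculator calculator;

     public double total(int items, double unitPrice) {
         return calculator.price(items, unitPrice);
     }
 }

 interface PriceCalculator {
     double price(int items, double unitPrice);
 }

 // The only implementation, so CDI resolves the injection point to it.
 class StandardPriceCalculator implements PriceCalculator {
     public double price(int items, double unitPrice) {
         return items * unitPrice;
     }
 }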


Interceptors

Monday, August 25, 2014

Oracle PL/SQL functions and procedures

PL/SQL Function

DECLARE
   a number;
   b number;
   c number;
FUNCTION findMax(x IN number, y IN number)
RETURN number
IS
    z number;
BEGIN
   IF x > y THEN
      z:= x;
   ELSE
      z := y;
   END IF;

   RETURN z;
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      DBMS_OUTPUT.PUT_LINE('No data found');

END;

Calling a function from another function/procedure
BEGIN
   a:= 23;
   b:= 45;
   c := findMax(a, b);
   dbms_output.put_line(' Maximum of (23,45): ' || c);

END;

PL/SQL Procedure

DECLARE
   a number;
   b number;
   c number;

PROCEDURE findMin(x IN number, y IN number, z OUT number) IS
BEGIN
   IF x < y THEN
      z:= x;
   ELSE
      z:= y;
   END IF;
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      DBMS_OUTPUT.PUT_LINE('No data found');
END;

Calling a procedure
  1. From the SQL prompt:
 EXECUTE [or EXEC] procedure_name;
  2. Within another procedure – simply use the procedure name:
 procedure_name;

Friday, August 22, 2014

Rule Engine

Drools - steps to use it in a typical program (see the sketch after the Guvnor note below)
  1. create working memory
  2. assert objects (insert facts into working memory)
  3. fire all rules
  4. retrieve objects
Rule attributes
- name
- group
- description
- priority

Rules
  1. Production rules (inference rules) - if x, then y
  2. Reaction rules - wait for a set of events
  3. Stateful
Rete algorithm (Forward Chaining)
  1. iterate through antecedents
  2. each time an antecedent is matched, add knowledge of the consequent
  3. do this until goal is reached
Guvnor - Business rules management system
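The four steps above as a minimal hedged sketch against the Drools 6 KIE API; the fact class and cache of rules (a .drl referenced from kmodule.xml on the classpath) are assumptions, not from these notes:

 import org.kie.api.KieServices;
 import org.kie.api.runtime.KieContainer;
 import org.kie.api.runtime.KieSession;

 // Mirrors the steps: create working memory (KieSession), insert facts,
 // fire all rules, then inspect the (possibly modified) facts.
 public class DiscountRuleRunner {

     public static void main(String[] args) {
         KieServices ks = KieServices.Factory.get();
         KieContainer container = ks.getKieClasspathContainer(); // loads rules defined in kmodule.xml
         KieSession session = container.newKieSession();         // the working memory

         Order order = new Order(250.0);
         session.insert(order);     // assert the fact into working memory
         session.fireAllRules();    // rules may update the fact (e.g. set a discount)
         session.dispose();

         System.out.println("discount = " + order.getDiscount());
     }
 }

 // Simple fact class the rules would match on.
 class Order {
     private final double amount;
     private double discount;

     Order(double amount) { this.amount = amount; }
     public double getAmount() { return amount; }
     public double getDiscount() { return discount; }
     public void setDiscount(double discount) { this.discount = discount; }
 }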

RESTful WebServices - API Keys


Http methods
GET (read), HEAD, POST (create), PUT (update), TRACE, DELETE (delete)

API keys are used for
  • Limit API usage, security

API flow sequence
  • Log in to PayPal developer site - register your application by logging into the PayPal Developer site using a PayPal account, and by going to the Applications tab. 
  • PayPal provides a client_id and secret - You will be issued a set of test credentials (‘client_id’ and ‘secret’) that you can use to authenticate your API calls using the OAuth 2.0 protocol.
  • Client calls /token endpoint with client_id and secret_key - You then obtain an access token for your application by sending a request to the ‘/v1/oauth2/token’ endpoint. You need to authenticate your access token request (using HTTP Basic Auth) with your application credentials (client_id and secret_key) obtained as described above. The ‘client_id’ and ‘secret’ become your user-id and password in HTTP Basic Auth. If you’re using cURL, you can pass them as -u "client_id:secret"
  • PayPal returns the access token - PayPal, acting as the “authorization server”, verifies your application credentials and returns an access token. The specific kind of access token that PayPal provides is a “Bearer Token”. PayPal also provides the token type in the response, which indicates the type as Bearer.
  • Client calls PayPal Rest API with access token - When you make the API calls, make request by adding the access token in the ‘Authorization’ header using the following syntax (as defined in the OAuth 2.0 protocol):
Authorization: {tokenType} {accessToken}
    Example: Authorization: Bearer EEwJ6tF9x5...4599F

    • Access token validity and expiration - PayPal-issued access tokens can be used to access all the REST API endpoints. These tokens have a finite lifetime and you must write code to detect when an access token expires. You can do this either by keeping track of the ‘expires_in’ value returned in the response from the token request (the value is expressed in seconds), or handle the error response (401 Unauthorized) from the API endpoint when an expired token is detected.
    link: https://developer.paypal.com/docs/integration/direct/paypal-oauth2/

    Common Web Security vulnerabilities

    1. Filter input, escape output
    2. SQL injection (user input is not correctly filtered for special characters)
    3. Cross-site scripting (injection of malicious script into pages viewed by other users)

    Wednesday, August 06, 2014

    Configuring SSL offloading (LB-OHS-WL)

    LB (F5-SSL) to WL(http) plugin to WL cluster(http)

    • Sticky sessions is default
    • SSL ends at LB
    • In F5, I needed to configure a header to be passed with the requests called WL-Proxy-SSL and set the value to true (WL-Proxy-SSL: true)
    • Cookie name goes in the plugin properties (JSESSIONID by default) and the deployment descriptor
    • WLProxySSLPassThrough should be set to ON, so that the OHS proxy/plug-in will pass the WL-Proxy-SSL header on to WebLogic Server
    • Configure the Adminserver so that it would acknowledge the proxy plugin headers.  This field is titled "WebLogic - Plug-In Enabled" and can be found on the page Configuration->General in the Advanced section

    Tuesday, July 15, 2014

    SOAP WebServices

    SOAP
    • port type is interface
    • binding is implementation

    Monday, May 19, 2014

    J2EE architecture notes

    Designer -- thinks about functional requirements
    Architect - thinks about non-functional requirements

    Architecture
    1. software elements
    2. relationship among elements
    3. external visible properties of these elements

    non-functional requirements
    1. constraints (financial, processing )
    2. systemic quality (*bility)

    Goals of architecture
    1. resolve issues related to non-functional requirements
    2. quality
    3. reduce risks, document them
    4. facilitate design
    5. document why certain decisions are made
    6. governance, policy making, best practice

    Architecture workflow
    1. select an arch type of the system (tiers, layers)
    2. Create a detailed deploy diagram for arch significant usecases
    3. refine arch model to satisfy non functional requirements
    4. Create and test arch baseline
    5. Document tech choices
    6. Create a arch template from the final deployment diagram

    Tuesday, January 28, 2014

    Gym schedule

    same as other scheduling application for small business
    - shows

    priests small business app



    1. schedule events/appointments, integrated with calendar
    2. lists supplies needed

    Tuesday, December 24, 2013

    Git workflow commands

    Pull from your remote repository to make sure everything is up to date
      git pull origin master
    

    Create a new local branch for keeping your changes way from your local master branch
      git branch my_new_feature
    

    Switch to that branch and start working
      git checkout my_new_feature
    

    After finishing work and running successfully any cukes/specs/tests, commit
      git commit -am "Implemented my new super duper feature"
    

    Then, switch back to local master and pull if you need to also merge any changes since you first pulled
      git checkout master
      git pull origin master
    

    Merge the local feature branch to master and run any cukes/specs/tests and if everything passes push changes
      git merge my_new_feature
      git push origin master
    

    This is my preference: I delete the temporary local branch when everything is merged and pushed
      git branch -d my_new_feature
    
    
    reference(copied from): http://amiridis.net/posts/13 

    Monday, October 28, 2013

    Coherence 3.6 FAQ

    The Coherence command-line console is also a full member of the cluster.
    If you just want to query, without joining the cluster as a member, use Coherence*Extend; 3.7 also has a REST API client.

    If a request affects more than 20% of cluster members, unicast is used; otherwise multicast is used.
    Acknowledgement is always unicast.
    TCMP = unicast + multicast
    unicast is based on a destination address
    multicast has no specific destination (broadcast)

    two requirements for objects to be put in cache:
    java beans
    some form of serialization

    3 args should match for all cluster members
    -Dtangosol.coherence.ttl=0
    -Dtangosol.coherence.cluster=XYZCluster
    -Dtangosol.coherence.clusterport=7009

    local_storage = false means the member joins the cluster as a non-storage member. It does not store any data, but can still put and get data.

    Implement PortableObject interface for cross-platform communication

    I got class sizes of 339, 93 and 75 for default, ExternalizableLite and PortableObject.

    LocalCache
    - No fault tolerance
    - exists along with application heap
    - instant read and writes


    replicated cache
    - writes are a problem because cache.put() returns only when all caches are synced with the write (put is blocking or synchronous call)
    - provides zero latency reads
    - since cache is replicated, it provides high availability
    - suited for read only or read mostly (data loaded during initialization)

    partitioned cache
    - data is assigned to certain buckets and these buckets are assigned to each partition
    - synchronously maintains backup of partitions on other machines
    - we always know the mapping between data key and partition owner so reads are always 1 network hop, writes
    are 2 (one for write and one for backup write)
    - number of partitions are always prime number

    Near cache
    -Holds frequently used data from partition
    -If some other member updates data, it is possible to update it in near cache by using synchronization strategy
    -if the local client asks for a data item existing in near cache that changed on remote partition, remote partition will first remove the data item from near cache to invalidate it. Then, this data item is read from remote partition.
    - Use to defined above 
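    A minimal hedged sketch of using a named cache from a cluster member (the cache name is invented; the cluster name, port, and ttl system properties listed above would be set on the command line):

     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;

     // Joins (or starts) the cluster, puts and gets an entry, then leaves.
     public class CoherenceHello {

         public static void main(String[] args) {
             CacheFactory.ensureCluster();                         // join the TCMP cluster
             NamedCache cache = CacheFactory.getCache("accounts"); // named (possibly partitioned) cache

             cache.put("ACC-1", "Savings");     // write (backed up on another member for a partitioned cache)
             Object value = cache.get("ACC-1"); // read (one network hop for a partitioned cache)
             System.out.println("ACC-1 = " + value);

             CacheFactory.shutdown();                              // leave the cluster cleanly
         }
     }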

    Sunday, September 29, 2013

    shopping list mobile application

    weekly groceries tracking app

    - one person in the family prepares a grocery list
    - he/she prepares the list by browsing through different categories
    - recently bought/frequently bought items bubbles to top
    - available items shows up left had side/picked items on right hand side
    - users drags items from left to right
    - user can see last bought date, qty, price, store etc.
    - user presses done
    - other family members can share the list


    Saturday, September 28, 2013

    SecureSocial Database persistence service for users and tokens

    /**

     * Copyright 2012 Jorge Aliss (jaliss at gmail dot com) - twitter: @jaliss
     *
     * Licensed under the Apache License, Version 2.0 (the "License");
     * you may not use this file except in compliance with the License.
     * You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     *
     */
    package service;

    import play.Application;
    import play.Logger;
    import scala.Option;
    import scala.Some;
    import securesocial.core.AuthenticationMethod;
    import securesocial.core.Identity;
    import securesocial.core.IdentityId;
    import securesocial.core.PasswordInfo;
    import securesocial.core.SocialUser;
    import securesocial.core.java.BaseUserService;

    import securesocial.core.java.Token;

    import java.net.UnknownHostException;

    import org.joda.time.DateTime;

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.DBCursor;
    import com.mongodb.DBObject;
    import com.mongodb.MongoClient;

    /**
     * A sample MongoDB-backed SecureSocial user service in Java.
     *
     * Note: This is NOT suitable for a production environment and is provided only as a guide.
     * A real implementation would add proper error handling and connection management.
     */
    public class CopyOfInMemoryUserService extends BaseUserService {

        private static final String IS_SIGN_UP = "isSignUp";
        private static final String BCRYPT = "bcrypt";
        private static final String PASSWORD = "password";
        private static final String AUTH_METHOD = "authMethod";
        private static final String LAST_NAME = "lastName";
        private static final String FIRST_NAME = "firstName";
        private static final String USER_ID = "userId";
        private static final String PROVIDER_ID = "providerId";
        private static final String USERS = "users";
        private static final String EXPIRATION_TIME = "expirationTime";
        private static final String CREATION_TIME = "creationTime";
        private static final String EMAIL = "email";
        private static final String UUID = "uuid";
        private static final String USER_TOKENS = "userTokens";
     
     
        public CopyOfInMemoryUserService(Application application) {
            super(application);
        }

        @Override
        public Identity doSave(Identity user) {
            Logger.debug("doSave(user)***"+user);
            // this sample returns the same user object, but you could return an instance of your own class
            // here as long as it implements the Identity interface. This will allow you to use your own class in the
            // protected actions and event callbacks. The same goes for the doFind(UserId userId) method.
            DB db = getMongoClient();          
            DBCollection userCollection = db.getCollection(USERS);
            BasicDBObject doc = new BasicDBObject(USER_ID, user.identityId().userId())
                                .append("userId:providerId", user.identityId().userId() + ":" + user.identityId().providerId())
                                .append(PROVIDER_ID, user.identityId().providerId())
                                .append(AUTH_METHOD,user.authMethod().method())
                                //.append("avatarUrl",user.avatarUrl().get())
                                .append(EMAIL,user.email().get())
                                .append(FIRST_NAME,user.firstName())
                                .append(LAST_NAME,user.lastName())
                                .append("fullName",user.fullName())
                                .append(PASSWORD,user.passwordInfo().get().password());

            Logger.debug("saving user:"+doc);
            userCollection.insert(doc);
         
            return user;
        }

        @Override
        public void doSave(Token token) {
            Logger.debug("***doSave(token):"+token);
            DB db = getMongoClient();          
            DBCollection userCollection = db.getCollection(USER_TOKENS);
         
            BasicDBObject doc = new BasicDBObject(UUID, token.getUuid())
                                .append(EMAIL, token.getEmail())
                                .append(IS_SIGN_UP, token.getIsSignUp())
                                .append(CREATION_TIME, Long.toString(token.getCreationTime().toDate().getTime()))
                                .append(EXPIRATION_TIME, Long.toString(token.getExpirationTime().toDate().getTime()));
            Logger.debug("Saving token:" + doc);
            userCollection.insert(doc);
        }

        @Override
        public Identity doFind(IdentityId identityId) {
            Logger.debug("****doFind(identityId):"+identityId);
            DB db = getMongoClient();          
            DBCollection userCollection = db.getCollection(USERS);
            BasicDBObject query = new BasicDBObject("userId:providerId", identityId.userId() + ":" + identityId.providerId());
            DBCursor cursor = userCollection.find(query);
         
            Identity identity = null;
            if( cursor.hasNext() ) {
                DBObject dbUser = cursor.next();
                Logger.debug("Found user (with identityId):"+dbUser);
                identity = new SocialUser(identityId,        
                        dbUser.get(FIRST_NAME).toString(),
                        dbUser.get(LAST_NAME).toString(),
                        dbUser.get(FIRST_NAME).toString() + " " + dbUser.get(LAST_NAME).toString(),
                        Option.apply(dbUser.get(EMAIL).toString()),
                        null,
                        new AuthenticationMethod( dbUser.get(AUTH_METHOD).toString() ),
                        null,
                        null,
                        Some.apply(new PasswordInfo(BCRYPT, dbUser.get(PASSWORD).toString(), null))
                    );
             
            }
            return identity;
        }

        @Override
        public Token doFindToken(String tokenId) {
            Logger.debug("doFindToken(tokenId):"+tokenId);
            DB db = getMongoClient();          
            DBCollection userCollection = db.getCollection(USER_TOKENS);
            BasicDBObject query = new BasicDBObject(UUID, tokenId);
            DBCursor cursor = userCollection.find(query);
         
            Token token = null;
            if( cursor.hasNext() ) {
                token = new Token();
                DBObject dbToken = cursor.next();
                Logger.debug("Found token with tokenId:"+dbToken);
             
                token.setUuid(dbToken.get(UUID).toString());
                token.setEmail(dbToken.get(EMAIL).toString());
                token.setIsSignUp( new Boolean( dbToken.get(IS_SIGN_UP).toString() ) );      
                token.setCreationTime( new DateTime(new Long( dbToken.get(CREATION_TIME).toString() ) ));    
                token.setExpirationTime( new DateTime( new Long( dbToken.get(EXPIRATION_TIME).toString()) ));
                token.setIsSignUp( new Boolean(dbToken.get(IS_SIGN_UP).toString()));
            }      
            return token;
        }

        @Override
        public Identity doFindByEmailAndProvider(String email, String providerId) {
            Logger.debug("finding user with email:"+email + " and providerId:"+providerId);
         
            Identity result = null;
            DB db = getMongoClient();          
            DBCollection userCollection = db.getCollection(USERS);
            BasicDBObject query = new BasicDBObject(EMAIL, email).append(PROVIDER_ID, providerId);
            DBCursor cursor = userCollection.find(query);
         
            if( cursor.hasNext() ) {
                DBObject dbUser = cursor.next();
                Logger.debug("found user(with email and providerId:"+dbUser);
                if( dbUser != null ) {
                    IdentityId userId = new IdentityId(dbUser.get(USER_ID).toString(), providerId);
                    result = new SocialUser(userId ,        
                            dbUser.get(FIRST_NAME).toString(),
                            dbUser.get(LAST_NAME).toString(),
                            dbUser.get(FIRST_NAME).toString() + " " + dbUser.get(LAST_NAME).toString(),
                            Option.apply(dbUser.get(EMAIL).toString()),
                            null,
                            new AuthenticationMethod( dbUser.get(AUTH_METHOD).toString() ),
                            null,
                            null,
                            Some.apply(new PasswordInfo(BCRYPT, dbUser.get(PASSWORD).toString(), null))
                        );
                }
            }
            Logger.debug("found user with email and provider:"+result);
            return result;
        }

        @Override
        public void doDeleteToken(String uuid) {
            Logger.debug("********* doDeleteToken() called ****");
            DB db = getMongoClient();          
            DBCollection userCollection = db.getCollection(USER_TOKENS);
            BasicDBObject query = new BasicDBObject(UUID, uuid);
            DBCursor cursor = userCollection.find(query);
         
            if( cursor.hasNext() ) {
                DBObject dbToken = cursor.next();
                Logger.debug("Deleting token with uuid:"+uuid);
                userCollection.remove(dbToken);
            }      
        }

        @Override
        public void doDeleteExpiredTokens() {
            Logger.debug("***deleteExpiredTokens()");
            DB db = getMongoClient();          
            DBCollection userCollection = db.getCollection(USER_TOKENS);
            DBCursor cursor = userCollection.find();
         
            Token token = null;
            while( cursor.hasNext() ) { // iterate over every token, not just the first
                token = new Token();
                DBObject dbToken = cursor.next();

                Logger.debug("Got token:" + dbToken);
                token.setUuid(dbToken.get(UUID).toString());
                token.setEmail(dbToken.get(EMAIL).toString());

                DateTime d1 = new DateTime( new Long(dbToken.get(CREATION_TIME).toString()) );
                token.setCreationTime(d1);
             
                DateTime d2 = new DateTime( new Long(dbToken.get(EXPIRATION_TIME).toString()) );
                token.setExpirationTime(d2);        
                if( token.isExpired() ) {
                    Logger.debug("Expired, deleting token:"+dbToken);
                    userCollection.remove(dbToken);
                }
            }
        }
     
        // Note: this opens a new MongoClient on every call and never closes it;
        // a real implementation would create one shared client and reuse it.
        private static DB getMongoClient() {
            MongoClient client = null;
            DB db = null;
            try {
                client = new MongoClient("localhost", 27017);

                db = client.getDB("UserDB");
            } catch(UnknownHostException e) {
                e.printStackTrace();
            }
         
            return db;
        }
    }