Java web developer's thoughts

Thursday, June 9, 2011

Remote logging in GWT in 3 easy steps

Steps to set up remote logging in GWT:

1) Add to app.gwt.xml:

<inherits name="com.google.gwt.logging.Logging"/>
<set-property name="gwt.logging.popupHandler" value="DISABLED"/>
<set-property name="gwt.logging.simpleRemoteHandler" value="ENABLED"/>

The 1st line adds the logging module to the project,
the 2nd disables the annoying logging popup,
and the 3rd enables server-side logging.

2) Add to web.xml:

<servlet>
<servlet-name>remoteLoggingServlet</servlet-name>
<servlet-class>com.google.gwt.logging.server.RemoteLoggingServiceImpl</servlet-class>
</servlet>

<servlet-mapping>
<servlet-name>remoteLoggingServlet</servlet-name>
<url-pattern>/app/remote_logging</url-pattern>
</servlet-mapping>


3) Use the standard JDK logging API like this:

import java.util.logging.Logger;
...
Logger log = Logger.getLogger(App.class.getName());
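Once the module and servlet are wired up, client-side logging calls look like plain JDK logging, and records at the configured level get forwarded to the server. Here is a minimal sketch; `App` stands for whatever your entry point class is (in real GWT code it would implement `EntryPoint`, which I leave out to keep the snippet plain Java):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class App {
    private static final Logger log = Logger.getLogger(App.class.getName());

    public void onModuleLoad() {
        // With simpleRemoteHandler enabled this record is also sent to the server.
        log.info("module loaded");
        try {
            throw new IllegalStateException("boom");
        } catch (IllegalStateException e) {
            // The exception is serialized and shows up in the server-side log too.
            log.log(Level.SEVERE, "something went wrong", e);
        }
    }
}
```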

Friday, March 12, 2010

Awesomely fast key value store design

I've got an interesting idea for an awesomely fast key-value store.

Writing data:
  1. When a data change request comes to the Client API, it stores the changed data in Temp Storage. The change request also gets stored to the Persistent Queue.
  2. The Queue Processing Job takes the queued update and applies it to the data in the Persistent Store.

Reading data:
  1. A read request comes to the DB Client API, which checks whether data for this request is available in Temp Storage. If it is, use it; otherwise go to the Persistent Store.

This approach allows writing to a distributed DB at the speed of writing to a persistent queue, which is usually way faster than DB updates. And reads can easily be scaled by adding additional replicas.

Potential issues with this design:
  1. Temp Storage will eventually start overflowing; it's hard to get memcached storage capacity as big as Persistent Storage capacity. When that happens the DB Client will fall back to the Persistent Store for records that have been pushed out of Temp Storage, so we need to make sure that Queue Processing is already done for the matching queued records.
  2. Temp Storage based on memcached is not that reliable, and if it goes down we might lose data consistency for a short period of time, until the current data in the queue has been propagated to the persistent DB. Its reliability can be improved, but let's look at what might happen in this scenario. The first thing that will happen is that users will temporarily lose their changes for records still in the queue. This might not be that bad, considering it happens only for a short period of time. But if a user who already had a change waiting in the queue submits yet another change at that moment, it might lead to a situation when a change is permanently lost.
  3. This design relies very heavily on the Queue Processing Job being reliable and fast enough, so it should be well designed. On the other hand, this design allows the Queue Processing Job to be temporarily stopped (for some maintenance tasks) without affecting end users, as long as Temp Storage is big enough and the job is fast enough to catch up with queued changes later.
  4. As with any key-value storage, this design has difficulty dealing with concurrent updates to the same record.
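The write and read paths above can be sketched with in-memory stand-ins: a map for the memcached-style Temp Storage, a queue for the Persistent Queue, and another map for the Persistent Store. The class and method names here are purely illustrative, not from any real client library:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

/** In-memory sketch of the write-behind design: fast writes go to a queue,
 *  reads prefer the temp storage and fall back to the persistent store. */
public class WriteBehindStore {
    private final Map<String, String> tempStorage = new HashMap<>();      // memcached stand-in
    private final Map<String, String> persistentStore = new HashMap<>();  // DB stand-in
    private final Queue<String[]> persistentQueue = new ArrayDeque<>();   // queued [key, value] updates

    /** Client API write: cheap, touches only temp storage and the queue. */
    public void put(String key, String value) {
        tempStorage.put(key, value);
        persistentQueue.add(new String[]{key, value});
    }

    /** Client API read: temp storage first, persistent store as fallback. */
    public String get(String key) {
        String v = tempStorage.get(key);
        return v != null ? v : persistentStore.get(key);
    }

    /** Queue Processing Job: drains queued updates into the persistent store. */
    public void processQueue() {
        String[] update;
        while ((update = persistentQueue.poll()) != null) {
            persistentStore.put(update[0], update[1]);
        }
    }

    /** Simulates temp storage eviction (issue #1 above). */
    public void evict(String key) {
        tempStorage.remove(key);
    }
}
```

Issue #1 shows up directly in this sketch: a read after eviction is only correct if `processQueue` has already handled the matching queued update.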


Thursday, February 18, 2010

Tokyo Tyrant/Memcached compatibility issue

Recently I was playing with Tokyo Tyrant and the Spy Memcached client, and discovered that, according to the Tokyo Tyrant docs, the "flags", "exptime", and "cas unique" parameters are ignored. This causes the Spy memcached client to be unable to handle serialized objects correctly: basically, whenever I save a serialized object there, I get a String object back.
After some hacking I managed to come up with this little class that stores the "flag" data in the byte array itself:

import net.spy.memcached.CachedData;
import net.spy.memcached.transcoders.SerializingTranscoder;

/**
 * TTSerializingTranscoder makes the spymemcached client work correctly with Tokyo Tyrant
 * by working around the fact that Tokyo Tyrant does not store put metadata
 * as defined in the memcached protocol.
 */
public class TTSerializingTranscoder extends SerializingTranscoder {

    @Override
    public Object decode(CachedData d) {
        // The first 4 bytes carry the flags, the rest is the actual payload.
        byte[] result = d.getData();
        byte[] data = new byte[result.length - 4];
        byte[] flag = new byte[4];
        System.arraycopy(result, 0, flag, 0, 4);
        System.arraycopy(result, 4, data, 0, result.length - 4);
        int flags = byteArrayToInt(flag);
        return super.decode(new CachedData(flags, data, getMaxSize()));
    }

    @Override
    public CachedData encode(Object o) {
        // Prepend the flags to the payload so they survive Tokyo Tyrant.
        final CachedData res = super.encode(o);
        byte[] b = res.getData();
        byte[] data = new byte[b.length + 4];
        final int flags = res.getFlags();
        System.arraycopy(intToByteArray(flags), 0, data, 0, 4);
        System.arraycopy(b, 0, data, 4, b.length);
        return new CachedData(flags, data, getMaxSize());
    }

    public static byte[] intToByteArray(int value) {
        return new byte[]{
                (byte) (value >>> 24),
                (byte) (value >>> 16),
                (byte) (value >>> 8),
                (byte) value};
    }

    public static int byteArrayToInt(byte[] b) {
        return (b[0] << 24)
                + ((b[1] & 0xFF) << 16)
                + ((b[2] & 0xFF) << 8)
                + (b[3] & 0xFF);
    }
}
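The int/byte[] helpers are easy to sanity-check with a round trip. This standalone snippet just repeats the two static methods from the transcoder above so it can run without spymemcached on the classpath:

```java
/** Standalone copy of the flag conversion helpers, for a quick round-trip check. */
public class FlagCodec {
    /** Big-endian encoding of an int into 4 bytes. */
    public static byte[] intToByteArray(int value) {
        return new byte[]{
                (byte) (value >>> 24),
                (byte) (value >>> 16),
                (byte) (value >>> 8),
                (byte) value};
    }

    /** Inverse of intToByteArray: rebuilds the int from 4 big-endian bytes. */
    public static int byteArrayToInt(byte[] b) {
        return (b[0] << 24)
                + ((b[1] & 0xFF) << 16)
                + ((b[2] & 0xFF) << 8)
                + (b[3] & 0xFF);
    }
}
```

Note the `& 0xFF` masks on the lower three bytes: without them, sign extension of negative bytes would corrupt the result.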


To use this class I had to extend DefaultConnectionFactory (see the code below) and pass it to the MemcachedClient constructor: new MemcachedClient(new SpyConnectionFactory(TTSerializingTranscoder.class)).



import net.spy.memcached.DefaultConnectionFactory;
import net.spy.memcached.transcoders.SerializingTranscoder;
import net.spy.memcached.transcoders.Transcoder;

/**
 * Connection factory that lets you plug in a custom transcoder class.
 */
public class SpyConnectionFactory extends DefaultConnectionFactory {
    private Class transcoderClass = SerializingTranscoder.class;

    public SpyConnectionFactory() {
    }

    public SpyConnectionFactory(Class transcoderClass) {
        this.transcoderClass = transcoderClass;
    }

    @Override
    public Transcoder<Object> getDefaultTranscoder() {
        try {
            //noinspection unchecked
            return (Transcoder<Object>) transcoderClass.newInstance();
        } catch (Exception e) {
            throw new RuntimeException("Failed to create transcoder for " + transcoderClass, e);
        }
    }

    public Class getTranscoderClass() {
        return transcoderClass;
    }

    public void setTranscoderClass(Class transcoderClass) {
        this.transcoderClass = transcoderClass;
    }
}

Tuesday, September 1, 2009

Spring DM, OSGi and WAR deployment

I spent some time trying to figure out how to run a Tomcat web app under Spring DM and the Apache Felix OSGi container.
First let's look at ./conf/config.properties; here is the list of modules to load:

1) Felix core and felix shell:
file:bundle/org.apache.felix.shell-1.2.0.jar \
file:bundle/org.apache.felix.shell.tui-1.2.0.jar \
file:bundle/org.apache.felix.bundlerepository-1.4.0.jar \
2) Spring DM jars:
file:bundle/com.springsource.net.sf.cglib-2.1.3.jar \
file:bundle/com.springsource.org.aopalliance-1.0.0.jar \
file:bundle/com.springsource.slf4j.api-1.5.0.jar \
file:bundle/com.springsource.slf4j.log4j-1.5.0.jar \
file:bundle/com.springsource.slf4j.org.apache.commons.logging-1.5.0.jar \
file:bundle/log4j.osgi-1.2.15-SNAPSHOT.jar \
file:bundle/org.springframework.aop-2.5.6.A.jar \
file:bundle/org.springframework.beans-2.5.6.A.jar \
file:bundle/org.springframework.context-2.5.6.A.jar \
file:bundle/org.springframework.core-2.5.6.A.jar \
file:bundle/org.springframework.web-2.5.6.A.jar \
file:bundle/spring-osgi-core-1.2.0.jar \
file:bundle/spring-osgi-extender-1.2.0.jar \
file:bundle/spring-osgi-io-1.2.0.jar \
3) Servlet API jar
file:bundle/com.springsource.javax.servlet-2.4.0.jar \
4) Tomcat Catalina and Tomcat starter jars:
file:bundle/catalina.osgi-5.5.23-SNAPSHOT.jar \
file:bundle/catalina.start.osgi-1.0.0.jar \
5) Spring DM Tomcat integration jars
file:bundle/spring-osgi-web-1.2.0.jar \
6) Spring DM web extender; it listens for war bundle deployments and hooks them up with Tomcat
file:bundle/spring-osgi-web-extender-1.2.0.jar \
7) and finally the war file itself
file:bundle/ui.war

To make the war file work in an OSGi environment I added some properties to the /META-INF/MANIFEST.MF file; here is the Ant script code that does it:

<target name="war" depends="build" description="Create a war file">
    <pathconvert property="jar.classpath" pathsep=", ">
        <mapper>
            <chainedmapper>
                <flattenmapper/>
                <globmapper from="*" to="WEB-INF/lib/*"/>
            </chainedmapper>
        </mapper>
        <path>
            <fileset dir="./war/WEB-INF/lib"/>
        </path>
    </pathconvert>

    <mkdir dir="build/jars"/>
    <jar destfile="build/jars/ui.war" basedir="war">
        <manifest>
            <attribute name="Bundle-Version" value="1.0"/>
            <attribute name="Bundle-ManifestVersion" value="2"/>
            <attribute name="Web-ContextPath" value="ui"/>
            <attribute name="Bundle-SymbolicName" value="com.ddao.ui"/>
            <attribute name="Bundle-Name" value="My UI"/>
            <attribute name="Export-Package" value="com.ddao.ui"/>
            <attribute name="Import-Package"
                       value="javax.servlet,javax.servlet.http,javax.servlet.resources,org.springframework.osgi.web.context.support,org.osgi.framework,org.springframework.web.context,org.springframework.web.context.support,org.springframework.beans.factory.config"/>
            <attribute name="Bundle-Classpath" value="WEB-INF/classes, ${jar.classpath}"/>
        </manifest>
    </jar>
</target>


I added the following lines to web.xml to make sure Spring uses the right application context type:


<context-param>
    <param-name>contextClass</param-name>
    <param-value>
        org.springframework.osgi.web.context.support.OsgiBundleXmlWebApplicationContext
    </param-value>
</context-param>

<listener>
    <listener-class>
        org.springframework.web.context.ContextLoaderListener
    </listener-class>
</listener>



The Spring configuration is expected to be in /WEB-INF/applicationContext.xml; here is an example:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:osgi="http://www.springframework.org/schema/osgi"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/osgi http://www.springframework.org/schema/osgi/spring-osgi-1.0.xsd">

    <osgi:reference id="osgiServiceExample" interface="com.ddao.services.OsgiExampleService" cardinality="0..1"/>

    <bean id="myDataServlet" class="com.ddao.ui.server.MyDataServlet">
        <property name="osgiService" ref="osgiServiceExample"/>
    </bean>
</beans>


To inject OSGi services into the servlet I used the autowire-capable bean factory:

@Override
public void init(ServletConfig servletConfig) throws ServletException {
    super.init(servletConfig);
    final WebApplicationContext applicationContext =
            WebApplicationContextUtils.getWebApplicationContext(getServletContext());
    applicationContext.getAutowireCapableBeanFactory().configureBean(this, "myDataServlet");
}



Hope this helps you save some time.

Sunday, April 19, 2009

Maybe we need both RDBMS and key-value storage together?

A recent trend in the high scalability community is to move from the relational DB storage model to key-value pairs. The problem with this approach comes when you need to store lists of references between objects:

  • When you need to add or remove a value in the list you have to read and write the whole list, and it can be big.

  • Conflict resolution between list changes is very difficult to deal with.

  • It’s complex to keep track of references to objects when you delete them.

So here is the idea: we could combine RDBMS and key-value storage and use each storage paradigm for the part it does best. The RDBMS can nicely manage references between objects using object IDs, and the key-value storage can handle storing serialized object content. This way we can leverage the tooling the RDBMS provides for managing references between objects and still move a big part of the IO load to easily scalable key-value storage.

This takes care of problems #1 and #2 very well. Problem #3 is a little more difficult. It is easy to find records that refer to an object being deleted if you don't have sharding, which might be an option for a middle-scale site, since we already moved a significant part of the IO to key-value storage. But for a high-scale site sharding is a must, and in that situation you will probably have to set up some sort of garbage-collection background process that removes refs to deleted objects in all shards.

An important part of this idea is how to implement the API for such a system. I'm now working on an implementation of this idea within the Dynamic DAO (ddao.sf.net) framework. At this point I plan to make it look like this:


public interface FooDao {
    @SelectCachedBeans("keyValueStorageName",
        "select foo_id from foo_ref where id=#0# start #1# limit #2#")
    List getFooForUser(long userId, int start, int limit);
}

This logic will execute a JDBC call for the given SQL statement, get the list of IDs, retrieve the cached objects, and return them in a list.
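That read path can be sketched with an in-memory map standing in for the key-value storage and a precomputed ID list standing in for the JDBC select. All class and method names here are illustrative, not the actual ddao implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of the @SelectCachedBeans read path: SQL returns only IDs,
 *  the beans themselves come back from the key-value storage. */
public class CachedBeanFetcher {
    private final Map<Long, String> keyValueStorage = new HashMap<>(); // serialized beans by ID

    /** Puts a bean into the key-value storage stand-in. */
    public void cache(long id, String bean) {
        keyValueStorage.put(id, bean);
    }

    /** In the real framework the ID list would come from the JDBC select. */
    public List<String> fetch(List<Long> idsFromSql) {
        List<String> beans = new ArrayList<>();
        for (Long id : idsFromSql) {
            String bean = keyValueStorage.get(id);
            if (bean != null) {          // a missing entry would fall back to the DB
                beans.add(bean);
            }
        }
        return beans;
    }
}
```

The ordering of the result list follows the SQL result (the ID list), not the cache, which keeps `start`/`limit` pagination meaningful.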
