Category Archives: Android

Mobile app development full lifecycle – part 1

Brainstorming

This is a really exciting stage. An idea hits you while travelling on the bus. What is missing from this world? A Facebook for dogs, of course! A million-dollar idea.

If you are a visual thinker, you quickly draw up screens in your head. You envision experiences, key moments of using the app. You quickly realize that this would be a very silly app, but that’s okay. It’s okay to throw out most of the ideas at this stage. Let the right side of the brain do its thing.

How about a pixel graphics creator app for iPad? Let’s work with this from now on.

Feature list

Let’s break the app up into a set of features and put them on a list so we can manage them.

- Obviously, a pixel graphics creator app needs a canvas.
- Creating lines easily might be useful – put this on the list too.
- The canvas will be zoomed in most of the time, yet it’s important to see the full picture at its original size. This can be solved with a secondary, non-interactive bird’s-eye view.
- Predefined color palettes! Color pickers are hard to use, and entering color codes is not user-friendly on mobile either. Palettes solve this problem.
- Exporting the finished image.
- Animation support!
- And so on.

When the list is done, let’s mark the features that are essential for the first working version of the app. These will be our MVP. We mark the canvas, the secondary view, exporting, and a simple version of the color palettes as MVP. The rest are nice-to-haves for now.

Closing remarks

We should do some minimal market research at this stage. Check whether a very similar app is already in the app store. Even if there is, that doesn’t mean there isn’t room for a better one. But the app store is a very competitive place with an awful search engine, so it’s better to face the oncoming marketing troubles early.

Mockups

Interface Builder is great, but it’s not good for mockups. I personally use Balsamiq Mockups, and pen and paper.

When I designed a side-scroller platform game for iOS and Android, I cut all the screen sizes I wanted to support out of cardboard and started to play with them. I wanted to provide pixel-perfect graphics on key devices, and I also wanted to minimize the amount of artwork, as I was really slow at it back then. By playing with the cardboard I found the solution. The background was created for the biggest screen height, but I made sure that the middle section – where all the interaction happens – also fit on the device with the smallest height. The rest of the background is just for visual enjoyment, so it was not a big deal when the top and bottom were cut off on small devices. This also enabled a nice feature: cheaply imitating an earthquake at a key point in the game. As the screen scrolled horizontally, I didn’t have a problem with the width.

All I wanted to say is: use whatever works for you. But if you are not working alone and have to share the mockups, using a digital tool like Balsamiq can be much simpler than, say, scanning your hand drawings.

The mockups help concretize the product. It often turns out that many features simply don’t fit on the screen. You have to iterate on them a lot, and sometimes you have to make painful decisions – like cutting a critical feature.

When the main screens are finished, the main interactions and the application flow should be designed. Be vigilant about possible error paths at every step: you could end up spending most of your time implementing error paths, not happy paths. If you can simplify error handling on the UI side at this stage (like a dedicated UI element that indicates errors asynchronously), it will save you a tremendous amount of time later.

When all the screens and the application flow are done, the last step in this stage is to show it all to someone and get some early input. They could point out major weaknesses in the design. It’s cheaper to make changes now than later, during implementation.

I’ll continue with coding the first prototype in the following post.

Don’t tie your engineers’ hands

This blog post is about how companies waste a lot of time and money by forcing a bad method of working on their mobile engineers.

It usually starts with the old and seemingly wise observation that writing code twice is bad. Some managers stop right there and make this a mantra for the company – hurting their own business in the process. I’ll explain why, and show that implementing the “same” feature twice is sometimes the best thing one can do.

I’ll walk through some common scenarios that come up at companies, like having an iOS app that needs to be ported to Android ASAP. Along the way I’ll probably rant about my previous bad experiences. :)

First, a little about me: I started working on Android in 2009, and on iOS in 2010. I mostly use C++, Objective-C and Java on mobile, and I have worked both on apps and on mobile games.

1. Don’t use the same UI on all platforms

This isn’t that common nowadays, but almost every company made this mistake when they started working on Android. Usually they had an already-successful iOS app and wanted to release an Android version. Android was nowhere near as important to them as iOS, so they wanted to cut corners wherever they could. They started by reusing the design of the iPhone app. A phone is a phone, what harm could it do if they look the same? Well, as it turns out, plenty.

Let’s first look at it from a high level. The problem is that every mobile platform has different UI patterns, and users get accustomed to their chosen device’s UI. If an app uses a different pattern, they get confused or even angry. One good example is an iOS-style Back button in an Android app which, to make it worse, does not respond to the hardware back button (present on every Android device). This often means an instant uninstall.

But this choice affects the engineers’ work too. If a UI pattern is not present on one of the platforms, they have to implement it from scratch. That means wasted time and bugs. And even when there is a seemingly similar widget on both platforms, subtle differences between them can cause a lot of problems. I’ll talk about an example in detail below, but before that I’ll mention an exception.

Exception to this rule

If the existing app already has a unique UI, then it makes perfect sense to use the same design on both platforms.

Personal experience

I started working at Ustream in 2009 and spent almost 4 years there. We made the same mistake as above: our first Android app used iOS buttons. We didn’t make every mistake that can be made, like not responding to the Android buttons, but it still looked like an iOS app. I actually loved that first version. :) The Android UI was super ugly back then compared to iOS.

Half a year later they split the Android team, and I became the lead of one of them. We worked on the core apps and the other team started to work on a new ambitious product. Their product manager wanted to use the same UI patterns on both iOS and Android. He knew and liked iOS, so naturally this meant tons of extra work for the Android devs.

Where this approach failed miserably was the TabBar (TabActivity) widget. Android had a TabBar just like iOS; seemingly the only difference was that the buttons were at the top on Android. As it turned out, the Android implementation was so horrible that it was unsuitable for use in a real app. Google got rid of that view later; there is a better alternative on Android now.

When the engineers realised this, they tried to convince the product manager to use a different pattern on Android for this particular feature. They didn’t succeed, so they ended up reimplementing an iOS-like TabBar on Android. It turned out to be a huge task: what was supposed to be a simple few-days-long job became a several-month-long one. In the end, for various reasons, the product was cancelled and the product manager was let go.

Conclusion

Unless your app already has a very unique UI, use platform-specific UI patterns. By the way, this is a prerequisite for an app to become featured in its app store.
It means more work for the designers, but much less work for the engineers, as they can use the best design patterns for the given platform.

An important fact about the mobile platforms is that they evolve very quickly. If you force extra layers onto the APIs in any way, you’ll find that most of your time is spent maintaining those layers, and you will cut yourself off from the new best patterns and the hottest new features. For these reasons I always recommend using the native APIs for app development, and I suggest staying away from PhoneGap and similar offerings.

2. Feature parity should not mean source code parity (apps)

For many companies, feature parity across all platforms has the highest priority. The problem is that – in my experience – most companies want to achieve this by forcing source code parity: having the same classes in Java and Objective-C, or generating code from one platform for the other. Maybe there is something wrong with me, but I never understood how or why one leads to the other. :)

Personal experience

There were several mobile teams at Ustream. (Symbian, iOS, Android, Blackberry, Windows Mobile, Maemo …) Implementing Ustream on mobile was very challenging, as there was no public API on any platform for what we wanted to do. Sharing best practices and even code – when it made sense – came up constantly. There weren’t any arranged meetings to discuss these. But we were sitting close to each other and were always curious how the others solved a difficult problem on their own platform. I enjoyed working there a lot at that time.

We naturally knew that, as the platforms are so alien to each other, there is little room for sharing code between them. We had a C library that was shared between the iOS and Android teams for a few common tasks, but nothing else.
This doesn’t mean that we didn’t port code from one OS to the other. We did that several times, and we found that taking the time to adjust the code to the given platform was always worth it in the end.

Interestingly enough, it came up once that the BB team and the Android team should share the same code base, because you know … both use Java. ;) But thankfully the managers quickly rejected that idea.

BB had Java ME (based on Java 1.3), Android was based on Java 1.5, and our code base was a combination of C, C++, Java and ARM assembly. So comparing BB and Android is like comparing Java and JavaScript. ;)

Conclusion

The most important thing was that each team owned its own code. We could use the best design patterns and the best tools the platform provided. I can’t emphasize this enough.
Every team implemented new features as fast as it possibly could; we were not hindered by having to adhere to anyone else. But we shared our knowledge and ported code from each other when it made sense.

Exception to this rule

None.
Seriously, do yourself a favor and stay away from JS on mobile. :)

3. Feature parity should not mean source code parity (games)

Let me start with an exception here, because it’s more important. :)

Exception to this rule

If you start a game from scratch – as games usually use no (or only a few) OS-specific features – I think the best approach is to use a cross-platform engine. I would prefer to use my own C++ engine, but Unity or the Unreal Engine could be a good choice too.
LibGDX is very good for Android, and it’s cross-platform, but I personally wouldn’t use it for iOS.
I don’t recommend cocos2d-x. It’s one of the ugliest engines out there.

Porting from iOS to Android

The most common scenario is that you have a cocos2d game and you want to port it to Android.

I don’t think there is a good approach for this – there are only bad ones and super bad ones. The best thing you can do is to rewrite the game from scratch in a good cross-platform engine. That way at least the team gets to learn a more useful engine.

Having two separate teams, one for iOS and one for Android, could work too; then each could use the best engine and methods on its chosen platform. Feature parity can be reached easily by having the same product manager (or producer) for both teams. They would probably implement a given feature on different time scales, but they would usually be a lot faster than if they were tied together by shared code somehow.

And here comes a super bad choice.

Use cocos2d-x, just because it has a similar API to cocos2d.

Objective-C devs might like this at first, as cocos2d-x uses similar design patterns to Objective-C. But as they get to know C++, they will either start hating C++ for not having a dynamic runtime, or start hating cocos2d-x for being the ugliest C++ engine out there. Did I mention that it is also slow? :)

Reaching feature parity would be the least of your problems, because you will encounter scenarios where something works in cocos2d but doesn’t work in cocos2d-x. Then you either have to hold back the cocos2d team, or you have to release a lower-quality game on iOS. I think no sane game dev wants to use cocos2d-x a second time.

Porting from flash to mobile

This is another common scenario.

I don’t know ActionScript well, but I think it’s common knowledge by now that it’s not good for mobile. Most companies don’t even think of reusing ActionScript code on mobile. When this time comes in a company’s life, it is best to hire engineers with mobile experience.

Personal experience (cocos2d-x)

I worked for a few months at a game dev company. I was hired to find the best approach for porting their existing iOS (cocos2d) games to Android. They had already worked with an external company that ported one of their games to Android, and I think many things can be learned from their mistakes. They chose cocos2d-x because, well, it looks like cocos2d. This seems a reasonable move if you don’t already know cocos2d-x. After that, their approach was to manually rewrite every Objective-C line to C++, every other week. This is where I threw an exception. :)

My first reaction was amazement that this method worked at all. I mean, they reimplemented half of the Foundation framework and some of Cocoa Touch; they even found a way to mimic Objective-C’s runtime. What they achieved there is commendable. On the other hand, the end result was a much slower game than the original, and the code was super ugly. The engineers did not own the code. They were more like robots who manually rewrote every line of code to cocos2d-x – game devs who were never allowed to make a single creative decision during work.

They worked like this to reach feature parity on both platforms. The funny thing is that, as far as I know, they never got there; the iOS app was always several steps ahead of them.

The entire rewrite took them more time than the iOS team needed to release the original game. I think this alone would have been enough to abandon this method forever. But the end of the story is that I couldn’t convince a manager to try a new approach instead of this completely broken one, so I didn’t stay there long.

Conclusion

I think it’s always a bad move to tie your engineers’ hands in any way; I’ve yet to see a good enough reason for it. The sad truth is that most of the time when I encountered this at different companies, they didn’t reach their original goal. Sometimes rewriting code from scratch is not the worst choice you can make. Reaching feature parity does not require binding two source codes together – doing that instantly puts a serious burden on every engineer on your team, with little hope of achieving the original goal. Sharing knowledge between teams is much more important than putting artificial chains on them.

Optimizing Access to Raw Camera Frames on Android – Part 3.

This is the final post in this series. In the previous posts I showed a way to gain access to the raw camera frames by using private APIs. As using hidden APIs is very dangerous, extra care must be taken with error handling. As I mentioned in the first post, on Android 2.2 and above the improved setPreviewCallbackWithBuffer method makes all these hacks unnecessary.

We are done with the 1.5 and 1.6 versions; all that is left is 2.0 and 2.1. The good news is that 2.0 is basically extinct, so we don’t have to worry about it. The even better news is that the improved preview callback can be used on 2.1 too! It was already there in the framework, but hidden from the public API, so we have to use reflection to reach it.

Before writing a single line of code, let’s think for a second about the best way to add this new functionality to the app. In the previous posts we followed the pattern that most of the “logic” lives in the native parts of the code: the camera handling was managed from native code. But this new API is in Java. So if we want to keep this structure and not mess up the existing code base, we have to find a way to add this new type of camera handling to the LoadManager class. One way to achieve this is to create a fake native Camera class – implementing MyICam – whose methods do nothing but call back to the Java layer.

Let’s start with the CameraPreview.java class. This is where we would use the new camera API.

CameraPreview.java


class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
        private final static String TAG = "CameraPreview";

	private static Method addCBBuffer=null;
	private static Method setPreviewCB=null;
	private static boolean hasNewCameraApi = false;
	private static int bufSize = 115200; //320*240*3/2
	private static PreviewCallback cb = null;

	protected static SurfaceHolder mHolder;
	private static Camera camera=null;

	static {
		checkForNewCameraApi();
	};

	private static void checkForNewCameraApi() {
		try {
			Class clazz = Class.forName("android.hardware.Camera");
			addCBBuffer = clazz.getMethod("addCallbackBuffer", new Class[]{ byte[].class });
			setPreviewCB = clazz.getMethod("setPreviewCallbackWithBuffer", new Class[]{ PreviewCallback.class });
			hasNewCameraApi = true;
		} catch (ClassNotFoundException e) {
			Log.e(TAG, "Can't find android.hardware.Camera class");
			e.printStackTrace();
		}  catch (NoSuchMethodException e) {
			hasNewCameraApi = false;
		}
	}

	private static boolean addCallbacks(){
    	if(hasNewCameraApi){
    		PixelFormat p = new PixelFormat();
    		PixelFormat.getPixelFormatInfo(PixelFormat.YCbCr_420_SP, p);
    		byte[] buffer1 = new byte[bufSize];
    		byte[] buffer2 = new byte[bufSize];
    		byte[] buffer3 = new byte[bufSize];
    		try {
				addCBBuffer.invoke(camera, buffer1);
				addCBBuffer.invoke(camera, buffer2);
				addCBBuffer.invoke(camera, buffer3);
			} catch (IllegalArgumentException e) {
				Log.e(TAG, "..." , e);
				return false;
			} catch (IllegalAccessException e) {
				Log.e(TAG, "..." , e);
				return false;
			} catch (InvocationTargetException e) {
				Log.e(TAG, "..." , e);
				return false;
			}
			return true;
    	}
    	return false;
	}

//these will be called from native code only
    public static void initCameraFromNative(){
    	camera = Camera.open();
    	if (camera!=null) {
	    	try {
				camera.setPreviewDisplay(mHolder);
			} catch (IOException e) {
				Log.e(TAG, "..." , e);
			}
	    	Camera.Parameters parameters = camera.getParameters();
	    	parameters.setPreviewSize(320, 240);
	    	parameters.setPreviewFormat(PixelFormat.YCbCr_420_SP);
	    	parameters.setPreviewFrameRate(15);
	    	camera.setParameters(parameters);
    	}
   }
   public static void releaseCameraFromNative(){
    	if (camera != null){
    		camera.release();
    		camera = null;    		
    	}
    }

    //true means: start
    //false means: stop
    public static void setRecordingCallbackFromNative(boolean action){
    	if(action){
    		if(!addCallbacks()){
    			hasNewCameraApi = false;
    		}
    		if(hasNewCameraApi){
    			try {
					setPreviewCB.invoke(camera, new Object[]{
							(cb = new PreviewCallback() {

								@Override
								public void onPreviewFrame(byte[] data, Camera camera) {
									Native.previewCallback(data);
									try {										
										addCBBuffer.invoke(camera, data);
									} catch (IllegalArgumentException e) {
										Log.e(TAG, "..." , e);
									} catch (IllegalAccessException e) {
										Log.e(TAG, "..." , e);
									} catch (InvocationTargetException e) {
										Log.e(TAG, "..." , e);
									}
								}
							}) });
				} catch (IllegalArgumentException e) {
					Log.e(TAG, "..." , e);
					hasNewCameraApi = false;
				} catch (IllegalAccessException e) {
					Log.e(TAG, "..." , e);
					hasNewCameraApi = false;
				} catch (InvocationTargetException e) {
					Log.e(TAG, "..." , e);
					hasNewCameraApi = false;
				}
				//fallback to the old setPreviewCallback
				if(!hasNewCameraApi){
					camera.setPreviewCallback(new Camera.PreviewCallback() {

	    				@Override
	    				public void onPreviewFrame(byte[] data, Camera camera) {
	    					Native.previewCallback(data);
	    				}
	    			});
				}
    		} else{
			//old setPreviewCallback
    			camera.setPreviewCallback(new Camera.PreviewCallback() {

    				@Override
    				public void onPreviewFrame(byte[] data, Camera camera) {
    					Native.previewCallback(data);
    				}
    			});
    		}
    	} else {
    		camera.setPreviewCallback(null);
    	}
    }

//...other methods

The class above checks for the existence of the new preview callback API in its static initializer. If it finds the new API, it will try to use it instead of the old one. All of these methods make sense only if we keep in mind that we want to control these events from native code; all functions ending in …FromNative will be called from native code. As can be seen, the code falls back to the old public API if something goes wrong with the new one. This might or might not be what you want in your app.

We have to create a native Camera class that calls back to Java on Android 2.1 and above. Let’s call this class JCamera. It is very easy to implement, because it does nothing but call back to the appropriate methods in CameraPreview.java. A rough sketch follows below, and after that we can look at the changes in LoadManager.cpp.
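
Here is a minimal sketch of what JCamera could look like. This is illustrative rather than the exact class from the project: the Java class path in FindClass() is an assumption, and error handling is omitted.

JCamera.h


#ifndef JCAMERA_H
#define JCAMERA_H

#include "MyICam.h"
#include "android_runtime/AndroidRuntime.h"

namespace android{

//A sketch only: every operation is forwarded to the matching static
//...FromNative method of CameraPreview.java shown above.
class JCamera : public MyICam {
	jclass preview; //global reference to the CameraPreview class

	JNIEnv* env() { return AndroidRuntime::getJNIEnv(); }
	void callStatic(const char* name, const char* sig) {
		env()->CallStaticVoidMethod(preview,
				env()->GetStaticMethodID(preview, name, sig));
	}

public:
	JCamera() {
		//assumed package; use your real class path
		jclass local = env()->FindClass("com/example/CameraPreview");
		preview = (jclass) env()->NewGlobalRef(local);
	}
	virtual ~JCamera() { env()->DeleteGlobalRef(preview); }

	virtual bool initCamera() {
		callStatic("initCameraFromNative", "()V");
		return true;
	}
	virtual void releaseCamera() {
		callStatic("releaseCameraFromNative", "()V");
	}
	virtual void setRecCallback(frame_cb cb, void* cookie, int flag=F_CAMERA) {
		//frames will arrive through Native.previewCallback() instead of cb
		env()->CallStaticVoidMethod(preview,
				env()->GetStaticMethodID(preview, "setRecordingCallbackFromNative", "(Z)V"),
				(jboolean)(cb != NULL));
	}

	//the surface and the preview are driven from the Java side
	virtual void setSurface(int* surface) {}
	virtual void startPreview() {}
	virtual void stopPreview() {}
	virtual void setErrCallback(error_cb ecb, void* cookie) {}
	virtual void setAutoFocusCallback(autofocus_cb cb, void* cookie) {}
	virtual void autoFocus() {}
};

}//namespace

#endif /* JCAMERA_H */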

LoadManager.cpp


MyICam* LoadManager::createCam(){
	if(fallback){
		return new JCamera;
	} else{
		if(!handle){
			char dir[150];
			switch(SDK){
			case -1:
				LOGE("SDK is not set!");
				fallback = true;
				return new JCamera;
				break;
			case 3:
				snprintf(dir, 150, "%s/lib/libcupcakecam.so",dataDir);
				handle = dlopen(dir, RTLD_LAZY);
				if(!handle){
					fallback = true;
					return new JCamera;
				}
				break;
			case 4:
				snprintf(dir, 150, "%s/lib/libdonutcam.so",dataDir);
				handle = dlopen(dir, RTLD_LAZY);
				if(!handle){
					fallback = true;
					return new JCamera;
				}
				break;
			}

			createCamera = (createCamera_t*) dlsym(handle, "create");
			const char* dlsym_error = dlerror();
			if(dlsym_error){				
				fallback = true;
				handle = NULL;
				return new JCamera;
			}

			destroyCamera = (destroyCamera_t*) dlsym(handle, "destroy");
			dlsym_error = dlerror();
			if (dlsym_error) {
				fallback = true;
				handle = NULL;
				return new JCamera;
			}
		}
		return createCamera();
	}
}

void LoadManager::destroyCam(MyICam* iCam){
	if(fallback){
		delete iCam;
	} else{
		destroyCamera(iCam);
	}
}

Summary

The problem was to find a better way to access the preview frames from the camera than what the official API offered, as that had a major flaw below Android 2.2.
The solution was to use private APIs specific to the 1.5 and 1.6 versions, and to use reflection on 2.1. To avoid polluting the code base with dangerous private API calls, we used the Bridge pattern and dynamic loading to separate them into their own .so files. The camera handling was managed from native code at every step, even on 2.1 and above. This allowed the main code base to remain untouched.

Optimizing Access to Raw Camera Frames on Android – Part 2.

In the previous post I showed a way to utilize the private Camera API on Android 1.5. If you run that code on later Android OS versions, it will crash. If you don’t want to release a separate app for each OS version – which would be very ugly – you have to solve this problem somehow. First, let’s check what changed in Android 1.6 compared to 1.5. At the time of this writing you can browse the source code on GitHub; the file you should check out first is ui/Camera.h.
One major change is the callback handling: the recording/preview callbacks from 1.5 were moved to a separate CameraListener class in 1.6. There are other changes too, but they don’t seem to be as severe.

Let’s assume we are able to utilize the 1.6 Camera API too, just as we did on 1.5. Some problems still remain. First, it would be good to put the camera handling code behind a common interface, so the rest of our code doesn’t have to care which Android version it runs on. Secondly – and this is the more important one – we have to find a way to separate the private API calls from the main code base, because System.loadLibrary() could fail on us if a single library referenced both the 1.5 and the 1.6 private symbols.

These problems can be solved by applying the Wrapper/Adapter design pattern for the first one, and the Bridge design pattern combined with dynamic loading for the second one. We create a camera interface – let’s call it MyICam. It encapsulates the common methods that can be called on a Camera – like startPreview() – and it has its own error codes, camera flags and constants that can serve both the 1.5 and 1.6 APIs. Next we create the OS-specific CupCam and DonutCam classes that implement the MyICam interface. We compile these into two separate dynamic libraries: CupCam against the 1.5 Android source, DonutCam against the 1.6 Android source. And finally we create a manager class that loads the right camera class for the OS it currently runs on, and returns a MyICam pointer that hides the concrete implementation from the rest of the code.

Let’s start with a possible implementation for the interface MyICam.

MyICam.h


#ifndef MYICAM_H
#define MYICAM_H

#include <ui/Surface.h>

namespace android{

typedef void (*frame_cb)(void* mem, void *cookie);
typedef void (*error_cb)(int err, void *cookie);
typedef void (*autofocus_cb)(bool focused, void *cookie);

class MyICam{

public:
	//flags
	static const int F_JUSTPREVIEW = 0x00;
	static const int F_CAMCORDER = 0x01;
	static const int F_CAMERA = 0x05;
	//errors
	static const int M_NO_ERROR = 0;
	static const int M_DEAD_OBJECT = -32;
	static const int M_UNKNOWN_ERROR = 0x80000000;

	static const int PREVIEW_FORMAT_YUV420SP = 0x01;
	static const int PREVIEW_FORMAT_YUV420P = 0x02;

	int previewFormat;

	virtual ~MyICam(){};
	virtual void setSurface(int* surface)=0;
	virtual bool initCamera()=0;
	virtual void releaseCamera()=0;
	virtual void setRecCallback(frame_cb cb, void* cookie, int flag=F_CAMERA)=0;
	virtual void setErrCallback(error_cb ecb, void* cookie)=0;
	virtual void setAutoFocusCallback(autofocus_cb cb, void* cookie)=0;
	virtual void startPreview()=0;
	virtual void stopPreview()=0;
	virtual void autoFocus()=0;

	int getPreviewFormat() {return previewFormat;}
	void setPreviewFormat(int f) {previewFormat=f;}

};

// the types of the class factories
// These must be implemented on the name of "create" and "destroy"
typedef MyICam* createCamera_t();
typedef void destroyCamera_t(MyICam*);

}//namespace

#endif /* MYICAM_H */

The interface has a rich set of features, some of which you won’t necessarily need in your project, like autofocus handling. Most devices’ cameras return frames in YUV420SP format, so you could erase the format constants too. The narrower the feature set of the interface, the less work you have to do implementing it in the concrete classes.
The last two typedefs could be confusing at first. They are needed because dlopen() and dlsym() are inherently C APIs and can’t deal with C++ objects. We have to implement two C functions – create and destroy – that can be looked up by dlsym(). They are responsible for creating and deleting the C++ camera objects for us.
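
To make the mechanics concrete, here is a rough sketch of how such a factory pair is loaded and used with dlopen()/dlsym() – error handling is omitted, and the library path is just an example:


#include <dlfcn.h>

#include "MyICam.h"

using namespace android;

MyICam* loadCam(const char* path){
	//e.g. path = ".../lib/libdonutcam.so"
	void* handle = dlopen(path, RTLD_LAZY);
	createCamera_t* createCamera = (createCamera_t*) dlsym(handle, "create");
	//the returned object is a DonutCam or CupCam, but we only ever
	//see it through the MyICam interface
	return createCamera();
}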

Let’s see an implementation for DonutCam.h and DonutCam.cpp.

DonutCam.h


#ifndef DONUTCAM_H
#define DONUTCAM_H

#include "MyICam.h"

#include "android_runtime/AndroidRuntime.h"

#include "IMemory.h"
#include <ui/Surface.h>
#include "ui/Camera.h"

namespace android{

struct CamContext{
	frame_cb rec_cb;
	void* rec_cb_cookie;
};

struct AutoFocusContext{
	autofocus_cb af_cb;
	void* af_cb_cookie;
};

class DonutCam : public MyICam {
	bool hasCamera;
	bool hasListener;
	sp<Camera> camera;
	sp<Surface> mSurface;
	error_cb err_cb;
	void* err_cb_cookie;
	int rec_flag;
	CamContext* context;
	AutoFocusContext* aFContext;
	autofocus_cb autoFocusCallback;

public:
	DonutCam();
	virtual ~DonutCam();
	virtual void setSurface(int* surface);
	virtual bool initCamera();
	virtual void releaseCamera();
	virtual void setRecCallback(frame_cb cb, void* cookie, int flag=F_CAMERA);
	virtual void setErrCallback(error_cb ecb, void* cookie);
	virtual void setAutoFocusCallback(autofocus_cb cb, void* cookie);
	virtual void startPreview();
	virtual void stopPreview();
	virtual void autoFocus();
};

extern "C" MyICam* create() {
    return new DonutCam;
}

extern "C" void destroy(MyICam* iCam) {
    delete iCam;
}

}//namespace

#endif /* DONUTCAM_H */

Next, let’s see the interesting parts of DonutCam.cpp, i.e. where it differs most from CupCam.cpp.

DonutCam.cpp


namespace android{

class MyCamListener: public CameraListener
{
public:
    MyCamListener();
    ~MyCamListener() {release();}
    void setRecCallback(CamContext* context);
    void setAutoFocusCallback(AutoFocusContext* context);
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2);
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr);
    virtual void postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr);
    void release();

private:
    void copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType);

    jobject     mCameraJObjectWeak;     // weak reference to java object
    jclass      mCameraJClass;          // strong reference to java class
    sp<Camera>  mCamera;            // strong reference to native object
    CamContext* mContext;
    AutoFocusContext* mAFContext;
};

MyCamListener::MyCamListener(){	
	mContext = NULL;
	mAFContext = NULL;
}
void MyCamListener::setRecCallback(CamContext* context) {
	mContext = context;
}
void MyCamListener::setAutoFocusCallback(AutoFocusContext* context) {
	mAFContext = context;
}
void MyCamListener::notify(int32_t msgType, int32_t ext1, int32_t ext2){
	if (mAFContext && msgType == CAMERA_MSG_FOCUS) {
		if (mAFContext->af_cb){
			mAFContext->af_cb(ext1, NULL);
		}
	}
}
void MyCamListener::postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr){	
	postData(msgType, dataPtr);
}
void MyCamListener::postData(int32_t msgType, const sp<IMemory>& dataPtr){
	if ((dataPtr != NULL) && (mContext != NULL) && (mContext->rec_cb != NULL)) {
		ssize_t offset;
		size_t size;
		sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
		unsigned char* caaInBuf = ((unsigned char*)heap->base()) + offset;
		mContext->rec_cb(caaInBuf, mContext->rec_cb_cookie);
	}
}

sp<MyCamListener> listener;

//rest of DonutCam.cpp ...

The rest of the CupCam and DonutCam code is not very interesting after the previous post.
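
The one part worth calling out is how DonutCam attaches the listener during initialization. Roughly – this is a sketch assembled from the snippets above, with error handling trimmed:


bool DonutCam::initCamera(){
	camera = Camera::connect();
	if(camera->getStatus() != NO_ERROR){
		return false;
	}
	//on 1.6 the callbacks no longer go through setPreviewCallback()
	//as on 1.5, but through a CameraListener object
	listener = new MyCamListener();
	listener->setRecCallback(context);
	listener->setAutoFocusCallback(aFContext);
	camera->setListener(listener);
	if(camera->setPreviewDisplay(mSurface) != NO_ERROR){
		return false;
	}
	hasCamera = true;
	return true;
}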

As I mentioned above, we need some kind of manager class that loads the appropriate camera object for the given OS version. Let’s call this class LoadManager.

LoadManager.h


//...includes
#include "MyICam.h"

namespace android{

class LoadManager {
	int SDK;
	char* dataDir;
public:
	LoadManager();
	~LoadManager();

	void setSDKVersion(int sdk);
	void setDataDir(const char* datadir);

	MyICam* createCam();
	void destroyCam(MyICam*);
};

}//namespace

LoadManager.cpp


namespace android {

createCamera_t* createCamera;
destroyCamera_t* destroyCamera;
void* handle = NULL;

void LoadManager::setSDKVersion(int sdk) {
	this->SDK = sdk;
}

MyICam* LoadManager::createCam(){
	if(!handle){
		char dir[150];
		switch(SDK){
		case 3:
			snprintf(dir, 150, "%s/lib/libcupcam.so",dataDir);
			handle = dlopen(dir, RTLD_LAZY);
			if(!handle){
				//Need to handle it later!
			}
			break;
		case 4:
			snprintf(dir, 150, "%s/lib/libdonutcam.so",dataDir);
			handle = dlopen(dir, RTLD_LAZY);
			if(!handle){
				//Need to handle it later!
			}
			break;
		}

		createCamera = (createCamera_t*) dlsym(handle, "create");
		const char* dlsym_error = dlerror();
		if(dlsym_error){
			//Need to handle it later!
		}

		destroyCamera = (destroyCamera_t*) dlsym(handle, "destroy");
		dlsym_error = dlerror();
		if (dlsym_error) {
			//Need to handle it later!
		}
	}
	return createCamera();
}


void LoadManager::destroyCam(MyICam* iCam){
	destroyCamera(iCam);
}

//...rest of LoadManager

Now we have everything needed to support both the 1.5 and 1.6 versions at the same time, within one application. We still need to compile the CupCam and DonutCam sources into separate .so files, but that’s an easy task after the previous post. We didn’t handle the case when something goes wrong during dlopen() or dlsym(), because it is not obvious what we could do then. I’ll cover it in the following blog post.

Optimizing Access to Raw Camera Frames on Android – Part 1.

If you want to do some real-time processing on the raw camera frames – e.g. you are working on an augmented reality app – then ensuring a high fps rate is essential. If you also want to support the 1.5 OS version of Android, named Cupcake, you will quickly find that the camera API has a major flaw. On OS versions below 2.2 you are forced to use the old setPreviewCallback function. The problem with this method is that it allocates a new buffer for every preview frame, copies the frame into this buffer, and hands it back in a callback. Apart from memory allocation being generally slow, the GC has to run constantly to clean up the buffers. Setting the fps to 15 and the preview frames to the relatively low-resolution CIF format (352×288) results in 352 × 288 × 1.5 = 152,064 bytes per YUV420SP frame, that is, 15 × 152,064 = 2,280,960 bytes allocated every second. On a G1 phone with 192MB of RAM – of which only 15-20MB can be used by a given application – it becomes clear that this API can be very limiting.

I’ll show a method that bypasses the public preview callback API by using private APIs. Using private APIs in production code is dangerous, as it can break on devices you have never seen, or after a firmware update. In later blog posts I’ll also show some ways to minimize this risk by implementing fallback mechanisms.

You will need Linux or a Mac to build the Android source code yourself (building the source on a Mac is a bit tricky). To get the source, go to source.android.com. After the download, build it with make -j2. If this is the first time you have seen the Android source, you should definitely explore it a bit. (At the time of this writing the Android source code can’t be downloaded from the official site, because kernel.org was hacked a few weeks ago. Try alternate download locations.)

In the following sections I’ll create a small C++ app that utilizes the private Camera APIs. It helps a lot if you are familiar with the Android NDK (especially with JNI); however, the app will be built inside the Android source tree, not with the NDK.

First create a directory under cupcake_src/external/ and name it mycam. After that, create the following files: CupCam.h, CupCam.cpp, Native.cpp, Main.h, Main.cpp and Android.mk.

CupCam.h


#ifndef CUPCAM_H
#define CUPCAM_H

//352*288*3/2=152064
#define BUFLENGTH 152064

//STRING8
#include "jni.h"
#include "JNIHelp.h"
#include "android_runtime/AndroidRuntime.h"

#include "utils/IMemory.h"
#include <ui/Surface.h>
#include "ui/Camera.h"

namespace android{

typedef void (*frame_cb)(void* mem, void *cookie);
typedef void (*error_cb)(int err, void *cookie);

struct CamContext{
	frame_cb rec_cb;
	void* rec_cb_cookie;
};

class CupCam {
	bool hasCamera;
	sp<Camera> camera;
	sp<Surface> mSurface;
	error_cb err_cb;
	void* err_cb_cookie;
	int rec_flag;
	CamContext* mCamContext;

public:
	CupCam();
	virtual ~CupCam();
	virtual void setSurface(int* surface);
	virtual bool initCamera();
	virtual void releaseCamera();
	virtual void setRecordingCallback(frame_cb cb, void* cookie, int flag=FRAME_CALLBACK_FLAG_CAMERA);
	virtual void setErrorCallback(error_cb ecb, void* cookie);
	virtual void startPreview();
	virtual void stopPreview();
};

}//namespace

#endif /* CUPCAM_H */

This class handles the Camera for us in native code, bypassing the public Java Camera API.

CupCam.cpp


#define LOG_TAG "MyCupCam"
#include <utils/Log.h>

#define DEBUG_LOG 0

#include "CupCam.h"

namespace android{

volatile bool isDeleting=false;

void main_rec_cb(const sp<IMemory>& mem, void *cookie){

    CamContext* context = reinterpret_cast<CamContext*>(cookie);
    if(context == NULL) {
    	LOGE_IF(DEBUG_LOG,"context is NULL in main_rec_cb");
        return;
    }
    ssize_t offset;
    size_t size;
    sp<IMemoryHeap> heap = mem->getMemory(&offset, &size);
    unsigned char* inBuf = ((unsigned char*)heap->base()) + offset;

    if(!isDeleting){
    	context->rec_cb(inBuf, context->rec_cb_cookie);
    }
}

CupCam::CupCam() {
	LOGD_IF(DEBUG_LOG, "constructor");
	hasCamera = false;
	err_cb = NULL;
	err_cb_cookie = NULL;
	mCamContext = new CamContext();
	mCamContext->rec_cb = NULL;
	mCamContext->rec_cb_cookie = NULL;
	rec_flag=FRAME_CALLBACK_FLAG_NOOP;
}

CupCam::~CupCam() {
	LOGD_IF(DEBUG_LOG, "destructor");
	releaseCamera();
	if(mCamContext){
		delete mCamContext;
	}
}

void CupCam::setSurface(int* surface){
	LOGD_IF(DEBUG_LOG, "setSurface");
	mSurface = reinterpret_cast<Surface*> (surface);
}

bool CupCam::initCamera(){
	LOGD_IF(DEBUG_LOG, "initCamera");
	camera = Camera::connect();
	//make sure camera hardware is alive
	if(camera->getStatus() != NO_ERROR){
		LOGD_IF(DEBUG_LOG, "camera initialization failed");
		return false;
	}

	camera->setErrorCallback(err_cb, err_cb_cookie);

	if(camera->setPreviewDisplay(mSurface) != NO_ERROR){
		LOGD_IF(DEBUG_LOG, "setPreviewDisplay failed");
		return false;
	}

	const char* params = "preview-format=yuv420sp;preview-frame-rate=15;"
			"picture-size=355x288"
			";preview-size=355x288"
			";antibanding=auto;antibanding-values=off,50hz,60hz,auto;"
			"effect-values=mono,negative,solarize,pastel,mosaic,resize,sepia,posterize,whiteboard,blackboard,aqua;"
			"jpeg-quality=100;jpeg-thumbnail-height=240;jpeg-thumbnail-quality=90;jpeg-thumbnail-width=320;"
			"luma-adaptation=0;nightshot-mode=0;picture-format=jpeg;"
			"whitebalance=auto;whitebalance-values=auto,custom,incandescent,fluorescent,daylight,cloudy,twilight,shade";

	String8 params8 = String8(params);
	camera->setParameters(params8);
	if(mCamContext->rec_cb){
		camera->setPreviewCallback(main_rec_cb, mCamContext, rec_flag);
		isDeleting=false;
	}
	hasCamera = true;
	return true;
}

void CupCam::releaseCamera(){
	LOGD_IF(DEBUG_LOG, "releaseCamera");
	if(hasCamera){
		isDeleting=true;
		camera->setPreviewCallback(NULL, NULL, FRAME_CALLBACK_FLAG_NOOP);
		camera->setErrorCallback(NULL, NULL);
		camera->disconnect();
		hasCamera = false;
	}
}

void CupCam::setRecordingCallback(frame_cb cb, void* cookie, int flag){
	LOGD_IF(DEBUG_LOG, "setRecordingCallback");
	CamContext* temp = new CamContext();
	temp->rec_cb = cb;
	temp->rec_cb_cookie = cookie;
	rec_flag=flag;
	if(hasCamera){
		if(temp->rec_cb == NULL){
			isDeleting=true;
			camera->setPreviewCallback(NULL, NULL, FRAME_CALLBACK_FLAG_NOOP);
		} else{
			camera->setPreviewCallback(main_rec_cb, temp, rec_flag);
			isDeleting=false;
		}
	}
	delete mCamContext;
	mCamContext=temp;
}

void CupCam::setErrorCallback(error_cb ecb, void* cookie){
	LOGD_IF(DEBUG_LOG, "setErrorCallback");
	err_cb=ecb;
	err_cb_cookie=cookie;
	if(hasCamera){
		camera->setErrorCallback(err_cb, err_cb_cookie);
	}
}

void CupCam::startPreview(){
	LOGD_IF(DEBUG_LOG, "startPreview");
	if(hasCamera){
		camera->startPreview();
	}
}

void CupCam::stopPreview(){
	LOGD_IF(DEBUG_LOG, "stopPreview");
	if(hasCamera){
		camera->stopPreview();
	}
}

}//namespace

Let’s continue with the JNI glue Native.java and Native.cpp.

Native.java


package com.example;

public class Native {
	static {
		System.loadLibrary("Native");
	}

	public static native void initCamera(Object surface);
	public static native void releaseCamera();
	public static native void startPreview();
	public static native void stopPreview();
}

Native.cpp


#ifndef LOG_TAG
#define LOG_TAG "Native"
#include <utils/Log.h>
#endif

#define DEBUG_LOG 1

#include "jni.h"
#include "JNIHelp.h"
#include "android_runtime/AndroidRuntime.h"

#include "Main.h"

using namespace android;

Main* gMain=NULL; //"main" is not a valid name for a global variable in C++

static void initCamera(JNIEnv *env, jobject thiz, jobject jSurface){
	jclass surfaceClass = env->FindClass("android/view/Surface");
	jfieldID surfaceField = env->GetFieldID(surfaceClass, "mSurface", "I");

	int* surface = (int*)(env->GetIntField(jSurface, surfaceField));

	gMain->initCamera(surface);
}

static void releaseCamera(JNIEnv *env, jobject thiz){
	gMain->releaseCamera();
}

static void startPreview(JNIEnv *env, jobject thiz){
	gMain->startPreview();
}

static void stopPreview(JNIEnv *env, jobject thiz){
	gMain->stopPreview();
}

//Path to the Java part of the JNI glue, matching the package in Native.java
static const char *classPathName = "com/example/Native";
static JNINativeMethod methods[] = {
	{ "releaseCamera", "()V", (void*) releaseCamera },
	{ "initCamera", "(Ljava/lang/Object;)V", (void*) initCamera },
	{ "startPreview", "()V", (void*) startPreview },
	{ "stopPreview", "()V", (void*) stopPreview }
};

//Register several native methods for one class.
static int registerNativeMethods(JNIEnv* env, const char* className,
		JNINativeMethod* gMethods, int numMethods) {
	jclass clazz = env->FindClass(className);
	if (clazz == NULL) {
		LOGE_IF(DEBUG_LOG,"Native registration unable to find class '%s'", className);
		return JNI_FALSE;
	}
	if (env->RegisterNatives(clazz, gMethods, numMethods) < 0) {
		LOGE_IF(DEBUG_LOG,"RegisterNatives failed for '%s'", className);
		return JNI_FALSE;
	}
	return JNI_TRUE;
}

//Register native methods for all classes we know about.
static int registerNatives(JNIEnv* env) {
	if (!registerNativeMethods(env, classPathName, methods, sizeof(methods)
			/ sizeof(methods[0]))) {
		return JNI_FALSE;
	}
	return JNI_TRUE;
}

//This is called by the VM when the shared library is first loaded.
jint JNI_OnLoad(JavaVM* vm, void* reserved) {
	JNIEnv* env = NULL;
	jint result = -1;

	if (vm->GetEnv((void**) &env, JNI_VERSION_1_4) != JNI_OK) {
		LOGE_IF(DEBUG_LOG,"ERROR: GetEnv failed\n");
		goto bail;
	}
	assert(env != NULL);

	if (!registerNatives(env)) {
		LOGE_IF(DEBUG_LOG,"ERROR: native registration failed\n");
		goto bail;
	}

	/* success -- return valid version number */
	result = JNI_VERSION_1_4;

	gMain = new Main();

	bail: return result;
}

The CupCam class and the JNI glue assume a few things about your code. First, the heavy processing of the camera frames takes place in Main.cpp. Second, you have to send “down” a Surface object through Native.initCamera(), on which the camera preview frames will be shown.

Let’s see a dummy implementation for some functions in the Main class.


static void rec_callback(void* mem, void* cookie){
	Main* c = (Main*) cookie;
	c->recordingCallback(mem);
}

void Main::recordingCallback(void* mem){
	//buf is assumed to be a member buffer of BUFLENGTH bytes
	memcpy(buf, mem, BUFLENGTH);
	//do some stuff
}

//cupCam is assumed to be a CupCam* member of Main
void Main::releaseCamera(){
	cupCam->releaseCamera();
}

void Main::initCamera(int* surface){
	cupCam->setSurface(surface);
	cupCam->initCamera();
}

void Main::startPreview(){
	cupCam->startPreview();
}

void Main::stopPreview(){
	cupCam->stopPreview();
}

Starting and stopping the preview is best tied to the Surface lifecycle, as some events can only be observed at the Java level. A simple implementation of CameraPreview.java could be the following.

CameraPreview.java


class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
	private final static String TAG = "CameraPreview";

	protected static SurfaceHolder mHolder;

	CameraPreview(Context context) {
		super(context);
		init();
	}

	public CameraPreview(Context context, AttributeSet attrs) {
		super(context, attrs);
		init();
	}

	private void init() {
		setFocusable(true);
		mHolder = getHolder();
		mHolder.addCallback(this);
		mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
	}

	public void surfaceCreated(SurfaceHolder holder) {
		Native.initCamera(mHolder.getSurface());
		Native.startPreview();
	}

	public void surfaceDestroyed(SurfaceHolder holder) {
		Native.stopPreview();
		Native.releaseCamera();
	}

	public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
		//empty
	}
}

All that’s left is to compile the native code into a dynamic library. For this we need an Android.mk makefile.

Android.mk


LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

LOCAL_ARM_MODE := arm

LOCAL_SRC_FILES := \
Native.cpp \
Main.cpp \
CupCam.cpp

LOCAL_MODULE := libNative

LOCAL_C_INCLUDES := \
$(LOCAL_PATH) \
$(LOCAL_PATH)/includes \
$(ANDR_ROOT)/frameworks/base/camera/libcameraservice \
$(ANDR_ROOT)/frameworks/base/include/media \
$(ANDR_ROOT)/frameworks/base/include/binder \
$(ANDR_ROOT)/frameworks/base/include/utils

LOCAL_CFLAGS += -w -Wno-trigraphs -Wreturn-type -Wunused-variable -std=gnu99 -fPIC -O3 -fno-strict-aliasing -Wno-write-strings 

LOCAL_MODULE_SUFFIX := $(HOST_JNILIB_SUFFIX)

LOCAL_SHARED_LIBRARIES := \
	libandroid_runtime \
	libnativehelper \
	libcutils \
	libutils \
	libcameraservice \
	libmedia \
	libdl \
	libui \
	liblog \
    libicuuc \
    libicui18n \
    libsqlite
	
LOCAL_C_INCLUDES += $(JNI_H_INCLUDE)
LOCAL_LDLIBS := -lpthread -ldl

LOCAL_PRELINK_MODULE := false

include $(BUILD_SHARED_LIBRARY)

Compiling code inside the Android source tree is as easy as running the mm helper:


$ cd cupcake_source/external/mycam/
$ source ../../build/envsetup.sh
$ mm

The compiled library can be found at cupcake_source/system/lib/libNative.so. You have to copy this .so into your Android project’s /libs/armeabi/ folder. If these folders don’t exist, create them just as you would when using the Android NDK.

The code is not ready for release; the biggest problem is that it only works on the 1.5 OS. I’ll make it much smarter in the following blog posts.