
Don't tie your engineers' hands

This blog post is about how companies waste a lot of time and money by forcing a bad method of working on their mobile engineers.

It usually starts with the old and seemingly wise observation that writing code twice is bad. Some managers stop right there and make this a mantra for the company, and by doing so they hurt their own business. I'll explain why, and show that implementing the "same" feature twice is sometimes the best thing you can do.

I'll walk through some common scenarios that come up at companies, like having an iOS app and wanting to port it to Android ASAP. At the same time I'll probably rant about my previous bad experiences. :)

First, a little about me. I started working on Android in 2009, and on iOS in 2010. I mostly use C++, Objective-C and Java on mobile, and I've worked on both apps and mobile games.

1. Don’t use the same UI on all platforms

This isn't that common nowadays, but almost every company made this mistake when they started working on Android. Usually they had an already successful iOS app and they wanted to release an Android version. Android was nowhere near as important for them as iOS, so they wanted to cut corners wherever they could. They started by reusing the design of the iPhone app. A phone is a phone, so what harm could it do if they look the same? Well, as it turns out, plenty.

Let's first look at it from a high level. The problem is that every mobile platform has different UI patterns. Users get accustomed to their chosen device's UI. If an app uses a different pattern, they get confused or even angry. One good example is an iOS Back button in an Android app, made worse when it does not respond to the hardware back button (which is present on every Android device). This often means an instant uninstall.

But this choice affects the engineers' work too. If a UI pattern is not present on one of the platforms, they have to implement it from scratch. That means wasted time and bugs. But even when there is a seemingly similar widget on both platforms, subtle differences between them can cause a lot of problems. I'll talk about an example in detail below, but before that I'll mention an exception.

Exception to this rule

If the existing app already has a unique UI, then it makes perfect sense to use the same design on both platforms.

Personal experience

I started working at Ustream in 2009 and spent almost 4 years there. We made the same mistake as above: our first Android app used iOS buttons. We didn't make all the mistakes that can be made, though, like not responding to the Android buttons, but it still looked like an iOS app. I actually loved that first version :) because the Android UI was super ugly back then compared to iOS.

Half a year later they split the Android team, and I became the lead of one of the two. We worked on the core apps while the other team started working on a new, ambitious product. Their product manager wanted to use the same UI patterns on both iOS and Android. He knew and liked iOS, so naturally this meant tons of extra work for the Android devs.

Where this approach failed miserably was the TabBar (TabActivity) widget. Android had a TabBar just like iOS; seemingly the only difference was that the buttons were at the top on Android. As it turned out, the Android implementation was so horrible that it was unsuitable for use in a real app. Google got rid of that widget later; there is a better one on Android now.

When the engineers realised this, they tried to convince the product manager to use a different pattern on Android for this particular feature. They didn't succeed, so they ended up reimplementing an iOS-like TabBar on Android. It turned out to be a huge task: what was supposed to be a simple few-day task became a several-month-long job. In the end, for various reasons, that product was cancelled and the product manager was let go.

Conclusion

Unless your app already has a very unique UI, use platform-specific UI patterns. By the way, this is a prerequisite for the app to become featured in its app store.
It means more work for the designers, but it also means much less work for the engineers as they can use the best design patterns for the given platform.

An important fact about mobile platforms is that they evolve very quickly. If you force extra layers on top of the APIs in any way, you'll find that most of your time is spent maintaining those layers, and you will cut yourself off from the newest patterns and the hottest new features. For these reasons I always recommend using the native APIs for app development and suggest staying away from PhoneGap and similar offerings.

2. Feature parity should not mean source code parity (apps)

For many companies, feature parity across all platforms has the highest priority. The problem is that – in my experience – most companies want to achieve this by forcing source code parity: having the same classes in Java and Objective-C, or generating code from one platform for the other. Maybe there is something wrong with me, but I never understood how and why one leads to the other. :)

Personal experience

There were several mobile teams at Ustream (Symbian, iOS, Android, BlackBerry, Windows Mobile, Maemo …). Implementing Ustream on mobile was very challenging, as there was no public API on any platform for what we wanted to do. Sharing best practices and even code – when it made sense – came up constantly. There weren't any arranged meetings to discuss these, but we were sitting close to each other and were always curious how the others solved a difficult problem on their own platform. I enjoyed working there a lot at that time.

We naturally knew that, as the platforms are so alien to each other, there is little room for sharing code between them. We had a C library that was shared between the iOS and Android teams for a few common tasks, but nothing else.
This doesn't mean that we didn't port code from one OS to the other. We did that several times, but we found that making time to adjust the code for the given platform was always worth it in the end.

Interestingly enough, it came up once that the BB team and the Android team should share the same code base, because you know … both use Java. ;) But thankfully the managers quickly rejected that idea.

BB had Java ME (based on Java 1.3), Android is based on Java 1.5, and our code base was a combination of C, C++, Java and ARM assembly. So comparing BB and Android is like comparing Java and JavaScript. ;)

Conclusion

The most important thing is that the teams owned their own code. We could use the best design patterns and the best tools the platform provided to us. I can't emphasize this enough.
Every team implemented new features as fast as it could; we were not held back by having to adhere to anyone else. But we shared our knowledge and ported code from each other when it made sense.

Exception to this rule

None.
Seriously, do yourself a favor and stay away from JS on mobile.:)

3. Feature parity should not mean source code parity (games)

Let me start with an exception here, because it’s more important.:)

Exception to this rule

If you start a game from scratch – as games usually don't use OS-specific features (or only a few) – I think the best approach is to use a cross-platform engine. I would prefer to use my own C++ engine, but Unity or the Unreal Engine could be a good choice too.
LibGDX is very good for Android, and it’s cross-platform, but I personally wouldn’t use it for iOS.
I don't recommend cocos2d-x. It's one of the ugliest engines you can find out there.

Porting from iOS to Android

The most common scenario is that you have a cocos2d game and you want to port it to Android.

I don't think there is a good approach for this; there are only bad ones and super bad ones. I think the best thing you can do is rewrite it from scratch in a good cross-platform engine. That way at least the team gets to learn a more useful engine.

Having two separate teams, one for iOS and one for Android, could work too; in that case each could use the best engine/methods on their chosen platform. Feature parity can be reached easily by having the same product manager (or producer) for both teams. They would probably implement a given feature on different time scales, but they would usually be a lot faster than if they had been tied together by shared code somehow.

And here comes a super bad choice.

Use cocos2d-x, just because it has a similar API to cocos2d.

Objective-C devs might like this at first, as cocos2d-x uses similar design patterns to Objective-C. But as they get to know C++, they will either start hating C++ for not having a dynamic runtime or start hating cocos2d-x for being the ugliest C++ engine out there. Did I mention that it is also slow? :)

Reaching feature parity would be the least of your problems, because you will encounter scenarios where something works in cocos2d but doesn't work in cocos2d-x. Then you either have to hold back the cocos2d team, or you have to release a lower quality game on iOS. I think no sane game dev wants to use cocos2d-x a second time.

Porting from flash to mobile

This is another common scenario.

I don't know ActionScript well, but I think it's common knowledge now that it's not good for mobile. I think most companies don't plan to reuse ActionScript code on mobile. When this time comes in a company's life, it is best to hire engineers with mobile experience.

Personal experience (cocos2d-x)

I worked for a few months at a game dev company. I was hired to find the best approach for porting their existing iOS (cocos2d) games to Android. They had already worked with an external company that ported one of their games to Android. I think many things can be learned from their mistakes. They chose cocos2d-x because, well, it looks like cocos2d. This seems a reasonable move if you don't already know cocos2d-x. After that, their approach was that every other week they manually rewrote every Objective-C line in C++. This is where I threw an exception. :)

My first reaction was amazement that this method worked at all. I mean, they reimplemented half of the Foundation framework and some of Cocoa Touch, and they even found a way to mimic Objective-C's runtime. I think what they achieved there is commendable. On the other hand, the end result was a much slower game than the original, and the code was super ugly. The engineers did not own the code. They were more like robots who manually rewrote every line of code to cocos2d-x. They were game devs who were forced to never make a single creative decision during work.

They worked like this to reach feature parity on both platforms. The funny thing is that, as far as I know, they never got there. The iOS app was always several steps ahead of them.

The entire rewrite took them more time than the iOS team needed to release the original game on iOS. I think this alone should have been enough to abandon this method forever. But the end of the story is that I couldn't convince management to try a new approach instead of this completely broken one, so I didn't stay there long.

Conclusion

I think it's always a bad move to tie your engineers' hands in any way; I've yet to see a good enough reason for it. The sad truth is that most of the time when I encountered this at different companies, they didn't reach their original goal. Sometimes rewriting code from scratch is not the worst choice you can make. Reaching feature parity does not require binding two source codes together. By doing that you instantly put a serious burden on every engineer in your team, with little hope of achieving the original goal. Sharing knowledge between teams is much more important than putting artificial chains on them.

Entity Systems in cocos2d-x – Part 1

I fell in love with Entity Systems the first time I heard about them. It's a data-driven software architecture pattern that is very suitable for game development. I used it in a game written in Java last year, and now I've applied it to a cocos2d-x game written in C++. I'll share my experiences with them in this blog post.

Entity Component Systems

Entity

There are three actors in this design pattern. Entities represent game objects. Your hero character is an entity, but a piece of a wall in the game is also an entity. If you write your game in an MVC style, your Entity class would probably be an abstract class with a rich set of virtual functions. But in the ECS pattern the Entity is usually just an id.
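To make this concrete, here is the smallest possible illustration (my own, not tied to any particular framework) of what that means:

// In its purest form an entity is only an identifier that components refer to.
typedef unsigned int Entity;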

Component

The Components are the minions that differentiate the Entities. They make it possible to represent abilities like moving, attacking, being attacked, reacting to user input, etc. They are the main building blocks.

For example, you can have a PositionComponent class that stores the position of its entity in the game world, and you can also have an AIComponent class that stores the Entity's "AI state", etc.
You can think of the components as pure data classes (or even structs). They don't know about other components; they are independent. They don't contain any logic code either. The only methods they have are transformations on their own data. For example, you can have a HealthComponent class with a method like "bool damage(unsigned amount, DamageType type);".
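To illustrate the idea, here is a minimal sketch of two such components. This is my own toy example (the field names and the armor rule are assumptions), not code from the game discussed below.

struct PositionComponent
{
	float x{0.0f};
	float y{0.0f};
};

enum class DamageType { Physical, Magical };

struct HealthComponent
{
	unsigned hp{100};
	unsigned armor{0};

	// The only kind of logic a component carries: a transformation of its own data.
	bool damage(unsigned amount, DamageType type)
	{
		unsigned effective = amount;
		if (type == DamageType::Physical)
		{
			effective = (amount > armor) ? amount - armor : 0;
		}
		hp = (effective >= hp) ? 0 : hp - effective;
		return hp > 0;   // true while the entity is still alive
	}
};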

System

The glue that fits the components together is the Systems. They represent the game logic. They work on a set of components, e.g. the AISystem processes the AIComponents (and maybe more).
Contrary to the component classes, they can communicate with each other, although I prefer it if they only do that through the components.
Imagine a scenario where the AISystem processes the AIComponents one after another and decides that a game object should switch from an idle state to a moving state. At this step it would set the target tile in the entity's PathComponent and would also set the needsProcessing flag in it to alert the PathFindingSystem. It would also update the entity's RenderingComponent so the RenderingSystem can do its own work to make the change visible, and so on.
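A rough sketch of that scenario follows. All the names and fields here are my own assumptions made up for illustration; the point is that the AISystem talks to the PathFindingSystem and the RenderingSystem only by writing component data.

enum class AIState { Idle, Moving };

struct AIComponent
{
	AIState state{AIState::Idle};
	bool wantsToMove{false};
	int desiredTileX{0};
	int desiredTileY{0};
};

struct PathComponent
{
	int targetTileX{0};
	int targetTileY{0};
	bool needsProcessing{false};   // set here, consumed by the PathFindingSystem
};

struct RenderingComponent
{
	bool animationDirty{false};    // the RenderingSystem reacts to this flag
};

class AISystem
{
public:
	// Called once per game loop for every entity that has these components.
	void process(AIComponent& ai, PathComponent& path, RenderingComponent& rendering)
	{
		if (ai.state == AIState::Idle && ai.wantsToMove)
		{
			ai.state = AIState::Moving;
			path.targetTileX = ai.desiredTileX;
			path.targetTileY = ai.desiredTileY;
			path.needsProcessing = true;     // alert the PathFindingSystem
			rendering.animationDirty = true; // let the RenderingSystem switch animations
		}
	}
};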

The reason I like this approach so much is that the ECS pattern is one of those rare cases where not only will your code be better organised, more modular and easier to maintain, but it will even run faster.

On the downside, I think it takes much more time to come up with the first running version of your game, as you have to implement several systems and components before your game reaches the first playable state. But I also think that this will hugely pay off in the long run.

It also requires a different way of thinking, which takes some time to get used to.
For example, when you write the attack code you can't just write the attack code for your mage character. You have to think about every type of object that can attack. So you probably start by asking what attacking really means.
You can break it up, like having an AttackComponent with data members like attackRange and damageType, and your AI code will use these and the positions as input. (You will add more of these as needed.) It can decide that the attacker is too far away and push a move state onto the AI stack on top of the attack state, so it can resume attacking once it gets near the target. The RenderingComponent can have a different animation for attack and move, but it actually won't care what animation it needs to set; it only cares whether it needs to change the current animation. Other parts of your code can trigger particle effects, but again they won't care why the particles need to be emitted.

Modularity

The components are usually totally independent. You can add them to and remove them from any entity at runtime. This results in a super flexible design. It's easy to test different functionalities, because they are separated by design. Just as a quick example, you can add a bounding box around a game character by adding a DebugComponent to it at runtime.
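With the Artemis-style API this looks roughly like the snippet below. The DebugComponent itself is my own toy class, and the exact method names can differ slightly between ports, but addComponent()/refresh() is the usual Artemis flow.

#include <Artemis/Artemis.h>   // umbrella header of the artemis-cpp port; adjust to your setup

class DebugComponent : public artemis::Component
{
public:
	float r{1.0f}, g{0.0f}, b{0.0f};   // color of the debug bounding box
};

void enableDebugDraw(artemis::Entity& hero)
{
	hero.addComponent(new DebugComponent());
	hero.refresh();   // ask the world to re-evaluate which systems care about this entity
}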

CPU friendliness

In each game loop, the engine goes through the systems once, one after another. A system processes its components in the same manner. The components are not stored in the entities; components of the same type are held in an array. So as a system starts processing them, they are loaded into the CPU cache in the most efficient way possible, which can greatly speed things up.
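The following toy example only illustrates the storage idea; it is not how artemis-cpp actually lays out its memory.

#include <cstddef>
#include <vector>

struct PositionComponent { float x, y; };
struct VelocityComponent { float dx, dy; };

// Components of the same type sit next to each other in memory, so a system
// streams through tightly packed arrays instead of chasing pointers per entity.
struct ComponentStore
{
	std::vector<PositionComponent> positions;
	std::vector<VelocityComponent> velocities;   // index i belongs to entity i (simplified)
};

void movementSystem(ComponentStore& store, float dt)
{
	for (std::size_t i = 0; i < store.positions.size(); ++i)
	{
		store.positions[i].x += store.velocities[i].dx * dt;
		store.positions[i].y += store.velocities[i].dy * dt;
	}
}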

The game

The game I worked on is an isometric cocos2d-x game. It was originally a test given to me by a game dev company as part of their interview process. It was originally written in an MVC style, so the work I describe here started as a refactoring.

Choosing an Entity System

I have used an Entity System framework before in a Java project. (It's called Artemis. For how to integrate it with LibGDX for Android development, you can check out one of my code samples here.)

I decided to use a C++ port of the Artemis Entity Framework by Sidar Talei, but my only reason for this was that its interface was familiar to me. Both the EntityX and anax frameworks are worth a look; maybe I'll make a comparison between them in a later blog post. I cloned the Bitbucket repo and added some important bug fixes to it. The final artemis-cpp framework I used can be downloaded from here.

Cocos2d-x

I have to make a confession first. I think cocos2d-x in its current state is not a very good game engine. In fact, it could be an example of how not to write C++ code. While I love both C++11 and Objective-C, I also think that forcing iOS patterns onto C++ leads nowhere good. Seeing the choices they made in the engine (like its asset management) through an Android engineer's eyes is even more excruciating. :| So I naturally started by adding the missing std::nothrow parameter to the new operators in the create() methods, just to soothe myself. ;)
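For reference, this is roughly what the usual create() boilerplate looks like with the nothrow new added. MySprite is just a stand-in name; the pattern itself is the standard cocos2d-x one.

#include <new>   // for std::nothrow

MySprite* MySprite::create()
{
	// new (std::nothrow) returns NULL on allocation failure instead of throwing,
	// which matches the NULL check the cocos2d-x create() pattern already does.
	MySprite* sprite = new (std::nothrow) MySprite();
	if (sprite && sprite->init())
	{
		sprite->autorelease();
		return sprite;
	}
	CC_SAFE_DELETE(sprite);
	return NULL;
}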

Input handling

I found input handling in cocos2d-x very weird. One would assume that, as there is a scene graph in place, the input handling would be tied closely to it. Well, it's not: the input handling is totally independent of the CCNode graph. This was very counterintuitive for me, and I spent a great amount of time figuring out how to do it right in the game.

As a side note, in LibGDX you don't have a built-in scene graph, but you do have InputMultiplexers and built-in GestureDetectors. Using these, it was easy to set up separate GestureListenerSystems for the HUD, the Map and the Units. You can add these Systems to the world like this; I find this code more beautiful because of its symmetry.


InputMultiplexer multiplexer = new InputMultiplexer();
multiplexer.addProcessor(new GestureDetector(hudInputSystem));
multiplexer.addProcessor(new GestureDetector(objectInputSystem));
multiplexer.addProcessor(new GestureDetector(mapInputSystem));
Gdx.input.setInputProcessor(multiplexer);

world.setSystem(hudInputSystem);
world.setSystem(objectInputSystem);
world.setSystem(mapInputSystem);

On the downside, every call goes through the HUD, as it is the topmost input receiver. It checks whether a touch is relevant to it and either handles and swallows it, or passes it on to the next GestureListenerSystem. Sample code for this can be found here.

But let's get back to cocos2d-x. Based on the cocos2d documentation, you have two choices for input handling: Standard or Targeted touch delegates. You can't use both on the same CCNode, but you can mix them in separate parts of the app. As the map needed multi-touch support for zooming, but I also wanted the HUD to swallow the relevant touches, I ended up using both of them.

You have to remember that for this to work properly, you also have to set the CCTouchDelegate's priority, or you could find yourself in a situation where pressing a button on the HUD triggers an unseen object below it. By the way, this priority is an int, and apart from the fact that to get higher priority you have to go lower :), these values are even global! I think this is a great weakness of the cocos2d API. They could instead take full advantage of the node graph here and propagate the touches up that chain starting from the leaf nodes. That would be more intuitive, and as a side effect it would also get rid of the globalness of the priorities.
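In cocos2d-x 2.x terms the registration looks something like this. The delegate variables and the priority values are arbitrary examples of mine; the only rule to remember is that a lower number means a higher priority, and that these numbers are global.

#include "cocos2d.h"

using namespace cocos2d;

// hud, units and map are CCTouchDelegate subclasses created elsewhere.
void registerTouchDelegates(CCTouchDelegate* hud, CCTouchDelegate* units, CCTouchDelegate* map)
{
	CCTouchDispatcher* dispatcher = CCDirector::sharedDirector()->getTouchDispatcher();

	// Targeted delegate, highest priority (lowest number), swallows its touches.
	dispatcher->addTargetedDelegate(hud, -128, true);

	// Targeted delegate for the units, lower priority than the HUD.
	dispatcher->addTargetedDelegate(units, 0, false);

	// Standard (multi-touch) delegate for pinch-zoom and panning on the map.
	dispatcher->addStandardDelegate(map, 1);
}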

Goal

My goal was to create a TouchInputComponent which represents that an object is touchable. It would also encapsulate the input handling code. After a bit of work I was able to achieve this, well... almost.

TouchInputComponent.h


class TouchInputComponent : public artemis::Component, public cocos2d::CCTouchDelegate, public cocos2d::CCObject
{
public:
	TouchInputComponent(artemis::Entity& entity, cocos2d::CCNode* node);
	~TouchInputComponent();

	static MapTouchListener* touchListener;
	cocos2d::CCNode* node;
	artemis::Entity& entity;
	
	bool ccTouchBegan(cocos2d::CCTouch *touch, cocos2d::CCEvent *event) override;
	void ccTouchMoved(cocos2d::CCTouch *touch, cocos2d::CCEvent *event) override;
	void ccTouchEnded(cocos2d::CCTouch *touch, cocos2d::CCEvent *event) override;
	
	void start();
	void stop();
private:
	bool _touched{false};
	bool _delegateAdded{false};
	std::chrono::high_resolution_clock::time_point _tp;
};

I have a MapTouchListener class that receives the processed touches. It gets onEntityTouched and onEntityLongTouched events, and in other parts of the code it gets the same kind of events for map touches.
The component stores the node it visually belongs to (this is the root node of the ViewComponent) and also the Entity it belongs to. (In this framework the components otherwise don't know about their entities.)
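For clarity, the listener interface itself is roughly the following. This is a sketch reconstructed from how it is used in the InputSystem below; MapTile is the game's own tile struct.

class MapTouchListener
{
public:
	virtual ~MapTouchListener() {}

	// Entity touches, already resolved by the TouchInputComponents.
	virtual void onEntityTouched(artemis::Entity& entity, int zOrder) = 0;
	virtual void onEntityLongTouched(artemis::Entity& entity, int zOrder) = 0;

	// Touches that hit the map instead of an entity.
	virtual void onMapTouchedAtTile(const MapTile& tile) = 0;
	virtual void onMapLongTouchedAtTile(const MapTile& tile) = 0;
};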

TouchInputComponent.cpp


TouchInputComponent::TouchInputComponent(artemis::Entity& entity, cocos2d::CCNode* node) : entity(entity), node(node)
{
	CC_SAFE_RETAIN(node);
	start();
}
TouchInputComponent::~TouchInputComponent()
{
	stop();
	CC_SAFE_RELEASE(node);
}
bool TouchInputComponent::ccTouchBegan(cocos2d::CCTouch *touch, cocos2d::CCEvent *event)
{
	CCPoint p = node->convertTouchToNodeSpace(touch);
	CCRect rect(0.0f, 0.0f, node->getContentSize().width, node->getContentSize().height);
	_touched = rect.containsPoint(p);
	if (_touched) _tp = std::chrono::high_resolution_clock::now();
	return _touched;
}
void TouchInputComponent::ccTouchMoved(cocos2d::CCTouch *touch, cocos2d::CCEvent *event)
{
	if(_touched && !touch->getDelta().equals(CCPointZero))
	{
		_touched = false;
	}
}
void TouchInputComponent::ccTouchEnded(cocos2d::CCTouch *touch, cocos2d::CCEvent *event)
{
	if(_touched)
    {
		if (std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::high_resolution_clock::now() - _tp).count() >= 500) {
			TouchInputComponent::touchListener->onEntityLongTouched(entity, node->getZOrder());
		} else {
			TouchInputComponent::touchListener->onEntityTouched(entity, node->getZOrder());
		}
		_touched = false;
    }
}
void TouchInputComponent::start()
{
	if (_delegateAdded) return;
	
	CCDirector::sharedDirector()->getTouchDispatcher()->addTargetedDelegate(this, static_cast<int>(TouchPriority::Entity), false);
	_delegateAdded = true;
}
void TouchInputComponent::stop()
{
	if (_delegateAdded) {
		CCDirector::sharedDirector()->getTouchDispatcher()->removeDelegate(this);
		_delegateAdded = false;
	}
}

And here you can see why I failed to fully encapsulate the input handling in the component. Sadly, swallowing the touch events was not an option for me. The problem is that you have to decide whether you swallow a touch or not in ccTouchBegan(), and you can't change your mind later – which you would want to do a lot whenever a gesture gets invalidated. And not swallowing a relevant touch means you need some global code somewhere to which every kind of input is propagated and which checks which input is valid. Which means a few "bool ignoreNextTouch" style flags.

In my case the global input handler class was the InputSystem. This system doesn't process any components directly (which is totally valid). It gets its input asynchronously and stores only the last one. This is not a restriction, as it processes the input in every game loop if it needs to.
Using the z order, it figures out which entity was most likely intended to be touched and propagates this event to the other systems through their components.

InputSystem.h


/**
 * InputSystem
 *
 * The system is responsible for handling input (touches) from the player. It does NOT handle the HUD buttons or the pan and zoom gestures on the map.
 *
 * The System is an "empty" System in the sense that it does not operate directly on components.
 *
 * It handles only one touch event in a loop and discards others. This doesn't seem to be a big limitation.
 */
class InputSystem : public artemis::EntitySystem , public MapTouchListener {
public:
	InputSystem();
	
	void onEntityTouched(artemis::Entity&, int zOrder) override;
	void onEntityLongTouched(artemis::Entity&, int zOrder) override;
	void onMapTouchedAtTile(const MapTile&) override;
	void onMapLongTouchedAtTile(const MapTile&) override;
protected:
	virtual void begin() override;
	virtual void processEntities(artemis::ImmutableBag<artemis::Entity*>&) override;
	virtual void end() override;
	virtual bool checkProcessing() override;
private:
	artemis::Entity* _touchedEntity{nullptr};
	MapTile _touchedTile;
	std::set<artemis::Entity*> _selectedEntities;
	int _lastZOrder{-10000};
	bool _entityTouched{false};
	bool _mapTouched{false};
	bool _longTouched{false};
	bool _checkProcessing{false};
	
	bool _ignoreNextMapTouch{false};
	
	inline void clearFlags();
	
	void collectSelectionsAroundTile(const MapTile&);
	void selectEntity(artemis::Entity*);
	void unSelectEntity(artemis::Entity*);
	std::set<artemis::Entity*>& getSelectedEntities();
	void attackEntity(artemis::Entity*);
	void moveUnitsTo(const MapTile&, const std::set<artemis::Entity*>&);
};

void InputSystem::clearFlags()
{
	_checkProcessing = false;
	_entityTouched = false;
	_longTouched = false;
	_mapTouched = false;
}

The interesting parts of InputSystem.cpp follow.

InputSystem.cpp


void InputSystem::begin()
{
	//empty
}
void InputSystem::processEntities(ImmutableBag<Entity*>& bag)
{
	using std::string;

	if (_longTouched) {
		if (_entityTouched) {
			//_touchedTile = ...;
		}
		collectSelectionsAroundTile(_touchedTile);
		return;
	}
	
	if (_entityTouched) {
		GroupManager* gMan = world->getGroupManager();
		if (gMan->isInGroup(string(EntityGroup::ENEMY), *_touchedEntity)) {
			attackEntity(_touchedEntity);
		} else if (gMan->isInGroup(string(EntityGroup::ALLY), *_touchedEntity)) {
			selectEntity(_touchedEntity);
		} //no else branch!
		return;
	}
	
	if (_mapTouched) {
		moveUnitsTo(_touchedTile, _selectedEntities);
		return;
	}
}
void InputSystem::end()
{
	clearFlags();
	_lastZOrder = -100000;
}

Summary

It is possible to separate the input handling from the rest of the code. One important thing I forgot to mention earlier is that I was also able to eliminate the need to inherit from CCNode (or its cousins), which was not apparent at the beginning. The most important step in getting rid of the view classes was to separate the input handling from them.
I couldn't swallow the touches because it interfered with gesture detection. The only place where it worked was the HUD buttons, which is a victory in itself. :) Just make sure that you set the touch priority value of the HUD very low (to make its priority very high). :)

I'll introduce a greater part of the code in the following blog posts.

Optimizing Access to Raw Camera Frames on Android – Part 3.

This is the final post in this series. In the previous posts I showed a way to gain access to the raw camera frames by using private APIs. As it is very dangerous to use hidden APIs, extra care must be taken with error handling. As I mentioned in the first post, there is no need for hacks on Android 2.2 and above, because the improved setPreviewCallbackWithBuffer method available there makes them unnecessary.

We are done with the 1.5 and 1.6 versions. All that is left is 2.0 and 2.1. The good news is that 2.0 is basically extinct, so we don't have to worry about it. The even better news is that the improved preview callback can be used on 2.1 too! It was already implemented there, but hidden from the public API, so we have to use reflection to reach it.

Before writing a single line of code, let's think for a second about the best way to add this new functionality to the app. In the previous posts we followed the pattern that most of the "logic" lives in the native parts of the code: the camera handling was managed from native code. But this new API is in Java. So if we want to keep this logic and not mess up the existing code base, we have to find a way to add this new type of camera handling to the LoadManager class. One way to achieve this is to create a fake native Camera class – one that implements MyICam – and implement its methods so that they do nothing but call back to the Java layer.

Let’s start with the CameraPreview.java class. This is where we would use the new camera API.

CameraPreview.java


class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
        private final static String TAG = "CameraPreview";

	private static Method addCBBuffer=null;
	private static Method setPreviewCB=null;
	private static boolean hasNewCameraApi = false;
	private static int bufSize = 115200;
	private static PreviewCallback cb = null;

	protected static SurfaceHolder mHolder;
	private static Camera camera=null;

	static {
		checkForNewCameraApi();
	};

	private static void checkForNewCameraApi() {
		try {
			Class clazz = Class.forName("android.hardware.Camera");
			addCBBuffer = clazz.getMethod("addCallbackBuffer", new Class[]{ byte[].class });
			setPreviewCB = clazz.getMethod("setPreviewCallbackWithBuffer", new Class[]{ PreviewCallback.class });
			hasNewCameraApi = true;
		} catch (ClassNotFoundException e) {
			Log.e(TAG, "Can't find android.hardware.Camera class");
			e.printStackTrace();
		}  catch (NoSuchMethodException e) {
			hasNewCameraApi = false;
		}
	}

	private static boolean addCallbacks(){
    	if(hasNewCameraApi){
    		PixelFormat p = new PixelFormat();
    		PixelFormat.getPixelFormatInfo(PixelFormat.YCbCr_420_SP, p);
    		byte[] buffer1 = new byte[bufSize];
    		byte[] buffer2 = new byte[bufSize];
    		byte[] buffer3 = new byte[bufSize];
    		try {
				addCBBuffer.invoke(camera, buffer1);
				addCBBuffer.invoke(camera, buffer2);
				addCBBuffer.invoke(camera, buffer3);
			} catch (IllegalArgumentException e) {
				Log.e(TAG, "..." , e);
				return false;
			} catch (IllegalAccessException e) {
				Log.e(TAG, "..." , e);
				return false;
			} catch (InvocationTargetException e) {
				Log.e(TAG, "..." , e);
				return false;
			}
			return true;
    	}
    	return false;
	}

//these will be called from native code only
    public static void initCameraFromNative(){
    	camera = Camera.open();
    	if (camera!=null) {
	    	try {
				camera.setPreviewDisplay(mHolder);
			} catch (IOException e) {
				Log.e(TAG, "..." , e);
			}
	    	Camera.Parameters parameters = camera.getParameters();
	    	parameters.setPreviewSize(320, 240);
	    	parameters.setPreviewFormat(PixelFormat.YCbCr_420_SP);
	    	parameters.setPreviewFrameRate(15);
	    	camera.setParameters(parameters);
    	}
   }
   public static void releaseCameraFromNative(){
    	if (camera != null){
    		camera.release();
    		camera = null;    		
    	}
    }

    //true means: start
    //false means: stop
    public static void setRecordingCallbackFromNative(boolean action){
    	if(action){
    		if(!addCallbacks()){
    			hasNewCameraApi = false;
    		}
    		if(hasNewCameraApi){
    			try {
					setPreviewCB.invoke(camera, new Object[]{
							(cb = new PreviewCallback() {

								@Override
								public void onPreviewFrame(byte[] data, Camera camera) {
									Native.previewCallback(data);
									try {										
										addCBBuffer.invoke(camera, data);
									} catch (IllegalArgumentException e) {
										Log.e(TAG, "..." , e);
									} catch (IllegalAccessException e) {
										Log.e(TAG, "..." , e);
									} catch (InvocationTargetException e) {
										Log.e(TAG, "..." , e);
									}
								}
							}) });
				} catch (IllegalArgumentException e) {
					Log.e(TAG, "..." , e);
					hasNewCameraApi = false;
				} catch (IllegalAccessException e) {
					Log.e(TAG, "..." , e);
					hasNewCameraApi = false;
				} catch (InvocationTargetException e) {
					Log.e(TAG, "..." , e);
					hasNewCameraApi = false;
				}
				//fallback to the old setPreviewCallback
				if(!hasNewCameraApi){
					camera.setPreviewCallback(new Camera.PreviewCallback() {

	    				@Override
	    				public void onPreviewFrame(byte[] data, Camera camera) {
	    					Native.previewCallback(data);
	    				}
	    			});
				}
    		} else{
			//old setPreviewCallback
    			camera.setPreviewCallback(new Camera.PreviewCallback() {

    				@Override
    				public void onPreviewFrame(byte[] data, Camera camera) {
    					Native.previewCallback(data);
    				}
    			});
    		}
    	} else {
    		camera.setPreviewCallback(null);
    	}
    }

//...other methods

The class above checks for the existence of the new preview callback API in its static initializer block. If it finds the new APIs, it will try to use them instead of the old ones. All of these methods make sense only if we keep in mind that we want to control these events from native code; all functions ending in …FromNative will be called from native code. As can be seen, the code falls back to the old public APIs if something goes wrong with the new one. This might or might not be what you want in your app.

We have to create a native Camera class that calls back to Java on Android 2.1 and above. Let's call this class JCamera. It is very easy to implement, because it does nothing but call back to the appropriate method in CameraPreview.java. A rough sketch of it follows, and after that let's take a look at the changes in LoadManager.cpp.
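This sketch is only my illustration, not the original implementation: the cached JavaVM (g_vm) and the global reference to the CameraPreview class (g_previewClass) are assumed to be set up in JNI_OnLoad, and startPreviewFromNative/stopPreviewFromNative are hypothetical counterparts of the …FromNative methods shown above.

#include <jni.h>
#include <stdarg.h>

#include "MyICam.h"

namespace android{

extern JavaVM* g_vm;            //assumed: cached in JNI_OnLoad
extern jclass  g_previewClass;  //assumed: global reference to the CameraPreview class

class JCamera : public MyICam {
	frame_cb rec_cb;
	void* rec_cb_cookie;

	JNIEnv* env() {
		JNIEnv* e = NULL;
		if (g_vm->GetEnv((void**)&e, JNI_VERSION_1_4) != JNI_OK) {
			g_vm->AttachCurrentThread(&e, NULL);
		}
		return e;
	}
	void callStaticVoid(const char* name, const char* sig, ...) {
		JNIEnv* e = env();
		jmethodID mid = e->GetStaticMethodID(g_previewClass, name, sig);
		if (mid == NULL) return;
		va_list args;
		va_start(args, sig);
		e->CallStaticVoidMethodV(g_previewClass, mid, args);
		va_end(args);
	}

public:
	JCamera() : rec_cb(NULL), rec_cb_cookie(NULL) {}
	virtual ~JCamera() { releaseCamera(); }

	virtual void setSurface(int*) { /*the Java side already owns the surface*/ }
	virtual bool initCamera()     { callStaticVoid("initCameraFromNative", "()V"); return true; }
	virtual void releaseCamera()  { callStaticVoid("releaseCameraFromNative", "()V"); }

	//hypothetical names, hiding behind the "...other methods" of CameraPreview.java
	virtual void startPreview()   { callStaticVoid("startPreviewFromNative", "()V"); }
	virtual void stopPreview()    { callStaticVoid("stopPreviewFromNative", "()V"); }

	virtual void setRecCallback(frame_cb cb, void* cookie, int flag=F_CAMERA) {
		//Native.previewCallback() forwards the frames back to this callback
		rec_cb = cb;
		rec_cb_cookie = cookie;
		callStaticVoid("setRecordingCallbackFromNative", "(Z)V", (int)(cb != NULL));
	}
	virtual void setErrCallback(error_cb, void*) { /*errors stay on the Java side*/ }
	virtual void setAutoFocusCallback(autofocus_cb, void*) {}
	virtual void autoFocus() {}
};

}//namespace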

LoadManager.cpp


MyICam* LoadManager::createCam(){
	char *error=NULL;
	if(fallback){
		return new JCamera;
	} else{
		if(!handle){
			char dir[150];
			switch(SDK){
			case -1:
				LOGE("SDK is not set!");
				fallback = true;
				return new JCamera;
				break;
			case 3:
				snprintf(dir, 150, "%s/lib/libcupcakecam.so",dataDir);
				handle = dlopen(dir, RTLD_LAZY);
				if(!handle){
					fallback = true;
					return new JCamera;
				}
				break;
			case 4:
				snprintf(dir, 150, "%s/lib/libdonutcam.so",dataDir);
				handle = dlopen(dir, RTLD_LAZY);
				if(!handle){
					fallback = true;
					return new JCamera;
				}
				break;
			}

			createCamera = (createCamera_t*) dlsym(handle, "create");
			const char* dlsym_error = dlerror();
			if(dlsym_error){				
				fallback = true;
				handle = NULL;
				return new JCamera;
			}

			destroyCamera = (destroyCamera_t*) dlsym(handle, "destroy");
			dlsym_error = dlerror();
			if (dlsym_error) {
				fallback = true;
				handle = NULL;
				return new JCamera;
			}
		}
		return createCamera();
	}
}

void LoadManager::destroyCam(MyICam* iCam){
	if(fallback){
		delete iCam;
	} else{
		destroyCamera(iCam);
	}
}

Summary

The problem was to find a better way to get access to the preview frames from the camera than what the official API offered, as it had a major flaw on OS versions below Android 2.2.
The solution was to use private APIs specific to the 1.5 and 1.6 versions, and to use reflection on 2.1. To avoid polluting the code base with dangerous private API calls, we used the bridge pattern and dynamic loading to separate them into their own .so files. The camera handling was managed from native code at every step, even on 2.1 and above. This made it possible to keep the main code base untouched.

Optimizing Access to Raw Camera Frames on Android – Part 2.

In the previous post I showed a way to utilize the private Camera API on Android 1.5. If you run that code on later Android OS versions it will crash. If you don't want to release a separate app for every OS version – which would be very ugly – you have to solve this problem somehow. First, let's check what has changed in Android 1.6 compared to 1.5. At the time of this writing you can browse the source code on GitHub. The file you should check out first is ui/Camera.h.
One major part that changed is the callback handling: the recording/preview callbacks from 1.5 have been moved to a separate CameraListener class in 1.6. There are other changes too, but those don't seem to be as severe.

Let's assume that we were able to utilize the 1.6 Camera API too, just as we did on 1.5. There are still some problems after that. First, it would be good if we could put the camera handling code behind a common interface so the rest of our code doesn't have to care which Android version it is running on. Secondly – and this one is more important than the first – we have to find a way to separate the private API calls from the main codebase, because System.loadLibrary() could fail on us if our code can call both the 1.5 and 1.6 code.

These problems can be solved by applying the Wrapper/Adapter design pattern for the first one, and the Bridge design pattern combined with dynamic loading for the second one. So we create a camera interface; let's call it MyICam. It encapsulates the common methods that can be called on a Camera – like startPreview() – and it has its own error messages, camera flags and constants that can serve both the 1.5 and 1.6 APIs. Next we create the OS-specific CupCam and DonutCam classes that implement the MyICam interface. We compile these into two separate dynamic libraries: CupCam in the 1.5 Android source and DonutCam in the 1.6 Android source. And finally we create a manager class that loads the right camera class for the OS it is currently running on and returns a MyICam pointer, hiding the concrete implementation from the rest of the code.

Let’s start with a possible implementation for the interface MyICam.

MyICam.h.


#ifndef MYICAM_H
#define MYICAM_H

#include <ui/Surface.h>

namespace android{

typedef void (*frame_cb)(void* mem, void *cookie);
typedef void (*error_cb)(int err, void *cookie);
typedef void (*autofocus_cb)(bool focused, void *cookie);

class MyICam{

public:
	//flags
	static const int F_JUSTPREVIEW = 0x00;
	static const int F_CAMCORDER = 0x01;
	static const int F_CAMERA = 0x05;
	//errors
	static const int M_NO_ERROR = 0;
	static const int M_DEAD_OBJECT = -32;
	static const int M_UNKNOWN_ERROR = 0x80000000;

	static const int PREVIEW_FORMAT_YUV420SP = 0x01;
	static const int PREVIEW_FORMAT_YUV420P = 0x02;

	int previewFormat;

	virtual ~MyICam(){};
	virtual void setSurface(int* surface)=0;
	virtual bool initCamera()=0;
	virtual void releaseCamera()=0;
	virtual void setRecCallback(frame_cb cb, void* cookie, int flag=F_CAMERA)=0;
	virtual void setErrCallback(error_cb ecb, void* cookie)=0;
	virtual void setAutoFocusCallback(autofocus_cb cb, void* cookie)=0;
	virtual void startPreview()=0;
	virtual void stopPreview()=0;
	virtual void autoFocus()=0;

	int getPreviewFormat() {return previewFormat;}
	void setPreviewFormat(int f) {previewFormat=f;}

};

// the types of the class factories
// These must be implemented on the name of "create" and "destroy"
typedef MyICam* createCamera_t();
typedef void destroyCamera_t(MyICam*);

}//namespace

#endif /* MYICAM_H */

The interface has a rich set of features, some of which you don't necessarily need in your project, like autofocus handling. Most devices' cameras return frames in YUV420SP format, so you could erase the format constants too. The narrower the feature set of the interface, the less work you have to do to implement it in the concrete classes.
The last two typedefs could be confusing at first. They are needed because dlopen() and dlsym() are inherently C APIs and can't deal with C++ objects. We have to implement two C functions – create and destroy – that can be loaded by dlsym(). They will be responsible for creating and deleting the C++ camera objects for us.

Let’s see an implementation for DonutCam.h and DonutCam.cpp.

DonutCam.h


#ifndef DONUTCAM_H
#define DONUTCAM_H

#include "MyICam.h"

#include "android_runtime/AndroidRuntime.h"

#include "IMemory.h"
#include <ui/Surface.h>
#include "ui/Camera.h"

namespace android{

struct CamContext{
	frame_cb rec_cb;
	void* rec_cb_cookie;
};

struct AutoFocusContext{
	autofocus_cb af_cb;
	void* af_cb_cookie;
};

class DonutCam : public MyICam {
	bool hasCamera;
	bool hasListener;
	sp<Camera> camera;
	sp<Surface> mSurface;
	error_cb err_cb;
	void* err_cb_cookie;
	int rec_flag;
	CamContext* context;
	AutoFocusContext* aFContext;
	autofocus_cb autoFocusCallback;

public:
	DonutCam();
	virtual ~DonutCam();
	virtual void setSurface(int* surface);
	virtual bool initCamera();
	virtual void releaseCamera();
	virtual void setRecCallback(frame_cb cb, void* cookie, int flag=F_CAMERA);
	virtual void setErrCallback(error_cb ecb, void* cookie);
	virtual void setAutoFocusCallback(autofocus_cb cb, void* cookie);
	virtual void startPreview();
	virtual void stopPreview();
	virtual void autoFocus();
};

extern "C" MyICam* create() {
    return new DonutCam;
}

extern "C" void destroy(MyICam* iCam) {
    delete iCam;
}

}//namespace

#endif /* DONUTCAM_H */

Next, let's see the interesting parts of DonutCam.cpp, i.e. where it differs most from CupCam.cpp.

DonutCam.cpp


namespace android{

class MyCamListener: public CameraListener
{
public:
    MyCamListener();
    ~MyCamListener() {release();}
    void setRecCallback(CamContext* context);
    void setAutoFocusCallback(AutoFocusContext* context);
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2);
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr);
    virtual void postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr);
    void release();

private:
    void copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType);

    jobject     mCameraJObjectWeak;     // weak reference to java object
    jclass      mCameraJClass;          // strong reference to java class
    sp<Camera>  mCamera;            // strong reference to native object
    CamContext* mContext;
    AutoFocusContext* mAFContext;
};

MyCamListener::MyCamListener(){	
	mContext = NULL;
	mAFContext = NULL;
}
void MyCamListener::setRecCallback(CamContext* context) {
	mContext = context;
}
void MyCamListener::setAutoFocusCallback(AutoFocusContext* context) {
	mAFContext = context;
}
void MyCamListener::notify(int32_t msgType, int32_t ext1, int32_t ext2){
	if (mAFContext && msgType == CAMERA_MSG_FOCUS) {
		if (mAFContext->af_cb){
			mAFContext->af_cb(ext1, NULL);
		}
	}
}
void MyCamListener::postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr){	
	postData(msgType, dataPtr);
}
void MyCamListener::postData(int32_t msgType, const sp<IMemory>& dataPtr){
	if ((dataPtr != NULL) && (mContext != NULL) && (mContext->rec_cb != NULL)) {
		ssize_t offset;
		size_t size;
		sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
		unsigned char* caaInBuf = ((unsigned char*)heap->base()) + offset;
		mContext->rec_cb(caaInBuf, mContext->rec_cb_cookie);
	}
}

sp<MyCamListener> listener;

//rest of DonutCam.cpp ...

The rest of the CupCam and DonutCam code is not very interesting after the previous post.

As I mentioned above, we need some kind of manager class that loads the appropriate camera object for the given OS version. Let's call this class LoadManager.

LoadManager.h


//...includes
#include "MyICam.h"

namespace android{

class LoadManager {
	int SDK;
	char* dataDir;
public:
	LoadManager();
	~LoadManager();

	void setSDKVersion(int sdk);
	void setDataDir(const char* datadir);

	MyICam* createCam();
	void destroyCam(MyICam*);
};

}//namespace

LoadManager.cpp


namespace android {

createCamera_t* createCamera;
destroyCamera_t* destroyCamera;
void* handle = NULL;

void LoadManager::setSDKVersion(int sdk) {
	this->SDK = sdk;
}

MyICam* LoadManager::createCam(){
	char *error=NULL;
	if(!handle){
		char dir[150];
		switch(SDK){
		case 3:
			snprintf(dir, 150, "%s/lib/libcupcam.so",dataDir);
			handle = dlopen(dir, RTLD_LAZY);
			if(!handle){
				//Need to handle it later!
			}
			break;
		case 4:
			snprintf(dir, 150, "%s/lib/libdonutcam.so",dataDir);
			handle = dlopen(dir, RTLD_LAZY);
			if(!handle){
				//Need to handle it later!
			}
			break;
		}

		createCamera = (createCamera_t*) dlsym(handle, "create");
		const char* dlsym_error = dlerror();
		if(dlsym_error){
			//Need to handle it later!
		}

		destroyCamera = (destroyCamera_t*) dlsym(handle, "destroy");
		dlsym_error = dlerror();
		if (dlsym_error) {
			//Need to handle it later!
		}
	}
	return createCamera();
}


void LoadManager::destroyCam(MyICam* iCam){
	destroyCamera(iCam);
}

//...rest of LoadManager

Now we have everything needed to support both the 1.5 and 1.6 versions at the same time within one application. We still need to compile the CupCam and DonutCam sources into separate .so files, but that's an easy task after the previous post. We didn't handle the case where something goes wrong during dlopen() or dlsym(), as it is not very apparent what we could do then. I'll cover it in the following blog post.

Optimizing Access to Raw Camera Frames on Android – Part 1.

If you want to do some real-time processing on the raw camera frames – e.g., you are working on an augmented reality app – then ensuring a high fps rate is essential. If you also want to support Android OS version 1.5, named Cupcake, you will quickly find that the camera API has a major flaw. On OS versions below 2.2 you are forced to use the old setPreviewCallback function. The problem with this method is that it allocates a new buffer for every preview frame, copies the frame into this buffer and hands it back in a callback. Apart from the fact that allocating memory is generally slow, the GC has to run constantly to clean up the buffers. Setting the fps to 15 and the preview frame to the relatively low-resolution CIF format (352*288) results in 352 * 288 * 1.5 = 152,064 bytes per YUV420SP frame, i.e. 2,280,960 bytes allocated every second. If we take a G1 phone with 192 MB of RAM – of which only 15-20 MB can be used by a given application – it becomes clear that this API can be very limiting.

I'll show a method that bypasses the public preview callback API by using private APIs. Using private APIs in production code is dangerous, as it can break on devices you have never seen or after a firmware update. I'll also show some methods to minimize this risk by implementing fallback mechanisms in later blog posts.

You will need Linux or a Mac to build the Android source code yourself. Building the source on a Mac is a bit tricky. To get the source, go to source.android.com. After the download, build it with make -j2. If this is the first time you've seen the Android source, you should definitely explore it a bit. (At the time of this writing the Android source code can't be downloaded because kernel.org was hacked a few weeks ago. Try alternate download locations, maybe here.)

In the following sections I'll create a small C++ app that utilizes the private Camera APIs. It helps a lot if you are familiar with the Android NDK (especially with JNI); however, the app will be built inside the Android source tree and not with the NDK.

First create a directory under cupcake_src/external/ and name it mycam. After that, create the following files: CupCam.h, CupCam.cpp, native.cpp, Main.h, Main.cpp and Android.mk.

CupCam.h.


#ifndef CUPCAM_H
#define CUPCAM_H

//352*288*3/2=152064
#define BUFLENGTH 152064

//STRING8
#include "jni.h"
#include "JNIHelp.h"
#include "android_runtime/AndroidRuntime.h"

#include "utils/IMemory.h"
#include <ui/Surface.h>
#include "ui/Camera.h"

namespace android{

typedef void (*frame_cb)(void* mem, void *cookie);
typedef void (*error_cb)(int err, void *cookie);

struct CamContext{
	frame_cb rec_cb;
	void* rec_cb_cookie;
};

class CupCam {
	bool hasCamera;
	sp<Camera> camera;
	sp<Surface> mSurface;
	error_cb err_cb;
	void* err_cb_cookie;
	int rec_flag;
	CamContext* mCamContext;

public:
	CupCam();
	virtual ~CupCam();
	virtual void setSurface(int* surface);
	virtual bool initCamera();
	virtual void releaseCamera();
	virtual void setRecordingCallback(frame_cb cb, void* cookie, int flag=FRAME_CALLBACK_FLAG_CAMERA);
	virtual void setErrorCallback(error_cb ecb, void* cookie);
	virtual void startPreview();
	virtual void stopPreview();
};

}//namespace

#endif /* CUPCAM_H */

This class will handle the Camera for us in native code, bypassing the public Java Camera API.

CupCam.cpp


#define LOG_TAG "MyCupCam"
#include <utils/Log.h>

#define DEBUG_LOG 0

#include "CupCam.h"

namespace android{

volatile bool isDeleting=false;

void main_rec_cb(const sp<IMemory>& mem, void *cookie){

    CamContext* context = reinterpret_cast<CamContext*>(cookie);
    if(context == NULL) {
    	LOGE_IF(DEBUG_LOG,"context is NULL in main_rec_cb");
        return;
    }
    ssize_t offset;
    size_t size;
    sp<IMemoryHeap> heap = mem->getMemory(&offset, &size);
    unsigned char* inBuf = ((unsigned char*)heap->base()) + offset;

    if(!isDeleting){
    	context->rec_cb(inBuf, context->rec_cb_cookie);
    }
}

CupCam::CupCam() {
	LOGD_IF(DEBUG_LOG, "constructor");
	hasCamera = false;
	err_cb = NULL;
	err_cb_cookie = NULL;
	mCamContext = new CamContext();
	mCamContext->rec_cb = NULL;
	mCamContext->rec_cb_cookie = NULL;
	rec_flag=FRAME_CALLBACK_FLAG_NOOP;
}

CupCam::~CupCam() {
	LOGD_IF(DEBUG_LOG, "destructor");
	releaseCamera();
	if(mCamContext){
		delete mCamContext;
	}
}

void CupCam::setSurface(int* surface){
	LOGD_IF(DEBUG_LOG, "setSurface");
	mSurface = reinterpret_cast<Surface*> (surface);
}

bool CupCam::initCamera(){
	LOGD_IF(DEBUG_LOG, "initCamera");
	camera = Camera::connect();
	//make sure camera hardware is alive
	if(camera->getStatus() != NO_ERROR){
		LOGD_IF(DEBUG_LOG, "camera initialization failed");
		return false;
	}

	camera->setErrorCallback(err_cb, err_cb_cookie);

	if(camera->setPreviewDisplay(mSurface) != NO_ERROR){
		LOGD_IF(DEBUG_LOG, "setPreviewDisplay failed");
		return false;
	}

	const char* params = "preview-format=yuv420sp;preview-frame-rate=15;"
			"picture-size=355x288"
			";preview-size=355x288"
			";antibanding=auto;antibanding-values=off,50hz,60hz,auto;"
			"effect-values=mono,negative,solarize,pastel,mosaic,resize,sepia,posterize,whiteboard,blackboard,aqua;"
			"jpeg-quality=100;jpeg-thumbnail-height=240;jpeg-thumbnail-quality=90;jpeg-thumbnail-width=320;"
			"luma-adaptation=0;nightshot-mode=0;picture-format=jpeg;"
			"whitebalance=auto;whitebalance-values=auto,custom,incandescent,fluorescent,daylight,cloudy,twilight,shade";

	String8 params8 = String8(params, 510);
	camera->setParameters(params8);
	if(mCamContext->rec_cb){
		camera->setPreviewCallback(main_rec_cb, mCamContext, rec_flag);
		isDeleting=false;
	}
	hasCamera = true;
	return true;
}

void CupCam::releaseCamera(){
	LOGD_IF(DEBUG_LOG, "releaseCamera");
	if(hasCamera){
		isDeleting=true;
		camera->setPreviewCallback(NULL, NULL, FRAME_CALLBACK_FLAG_NOOP);
		camera->setErrorCallback(NULL, NULL);
		camera->disconnect();
		hasCamera = false;
	}
}

void CupCam::setRecordingCallback(frame_cb cb, void* cookie, int flag){
	LOGD_IF(DEBUG_LOG, "setRecordingCallback");
	CamContext* temp = new CamContext();
	temp->rec_cb = cb;
	temp->rec_cb_cookie = cookie;
	rec_flag=flag;
	if(hasCamera){
		if(temp->rec_cb == NULL){
			isDeleting=true;
			camera->setPreviewCallback(NULL, NULL, FRAME_CALLBACK_FLAG_NOOP);
		} else{
			camera->setPreviewCallback(main_rec_cb, temp, rec_flag);
			isDeleting=false;
		}
	}
	delete mCamContext;
	mCamContext=temp;
}

void CupCam::setErrorCallback(error_cb ecb, void* cookie){
	LOGD_IF(DEBUG_LOG, "setErrorCallback");
	err_cb=ecb;
	err_cb_cookie=cookie;
	if(hasCamera){
		camera->setErrorCallback(err_cb, err_cb_cookie);
	}
}

void CupCam::startPreview(){
	LOGD_IF(DEBUG_LOG, "startPreview");
	if(hasCamera){
		camera->startPreview();
	}
}

void CupCam::stopPreview(){
	LOGD_IF(DEBUG_LOG, "stopPreview");
	if(hasCamera){
		camera->stopPreview();
	}
}

}//namespace

Let’s continue with the JNI glue Native.java and Native.cpp.

Native.java


package com.example;

import android.content.Context;

public class Native {
	static {
		System.loadLibrary("Native");
	}

	public static native void initCamera(Object surface);
	public static native void releaseCamera();
	public static native void startPreview();
	public static native void stopPreview();
}

Native.cpp


#ifndef LOG_TAG
#define LOG_TAG "Native"
#include <utils/Log.h>
#endif

#define DEBUG_LOG 0

#include "jni.h"
#include "JNIHelp.h"
#include "android_runtime/AndroidRuntime.h"

#include "Main.h"

using namespace android;

Main* main=NULL;

static void initCamera(JNIEnv *env, jobject thiz, jobject jSurface){
	jclass surfaceClass = env->FindClass("android/view/Surface");
	jfieldID surfaceField = env->GetFieldID(surfaceClass, "mSurface", "I");

	int* surface = (int*)(env->GetIntField(jSurface, surfaceField));

	main->initCamera(surface);
}

static void releaseCamera(JNIEnv *env, jobject thiz){
	main->releaseCamera();
}

static void startPreview(JNIEnv *env, jobject thiz){
	main->startPreview();
}

static void stopPreview(JNIEnv *env, jobject thiz){
	main->stopPreview();
}

//Path to the Java part of the jni glue, e.g., com/example/Native
static const char *classPathName = "your/path/to/Native";
static JNINativeMethod methods[] = {
	{ "releaseCamera", "()V", (void*) releaseCamera },
	{ "initCamera", "(Ljava/lang/Object;)V", (void*) initCamera },
	{ "startPreview", "()V", (void*) startPreview },
	{ "stopPreview", "()V", (void*) stopPreview }
};

//Register several native methods for one class.
static int registerNativeMethods(JNIEnv* env, const char* className,
		JNINativeMethod* gMethods, int numMethods) {
	jclass clazz = env->FindClass(className);
	if (clazz == NULL) {
		LOGE_IF(DEBUG_LOG,"Native registration unable to find class '%s'", className);
		return JNI_FALSE;
	}
	if (env->RegisterNatives(clazz, gMethods, numMethods) < 0) {
		LOGE_IF(DEBUG_LOG,"RegisterNatives failed for '%s'", className);
		return JNI_FALSE;
	}
	return JNI_TRUE;
}

//Register native methods for all classes we know about.
static int registerNatives(JNIEnv* env) {
	if (!registerNativeMethods(env, classPathName, methods, sizeof(methods)
			/ sizeof(methods[0]))) {
		return JNI_FALSE;
	}
	return JNI_TRUE;
}

//This is called by the VM when the shared library is first loaded.
jint JNI_OnLoad(JavaVM* vm, void* reserved) {
	JNIEnv* env = NULL;
	jint result = -1;

	if (vm->GetEnv((void**) &env, JNI_VERSION_1_4) != JNI_OK) {
		LOGE_IF(DEBUG_LOG,"ERROR: GetEnv failed\n");
		goto bail;
	}
	assert(env != NULL);

	if (registerNatives(env) < 0) {
		LOGE_IF(DEBUG_LOG,"ERROR: native registration failed\n");
		goto bail;
	}

	/* success -- return valid version number */
	result = JNI_VERSION_1_4;

	main = new Main();

	bail: return result;
}

The CupCam and the JNI glue assume a few things about your code. First, the heavy processing of the camera frames will take place in Main.cpp. Second, you have to send a surface object "down" through Native.initCamera(); the camera preview frames will be shown on it.

Let’s see a dummy implementation for some functions in the Main class.


static void rec_callback(void* mem, void* cookie){
	Main* c = (Main*) cookie;
	c->recordingCallback(mem);
}

void Main::recordingCallback(void* mem){
	tUint8 *memBuf = (tUint8 *) mem;
	memcpy(buf, memBuf, BUFLENGTH);
	//do some stuff
}

void Main::releaseCamera(){
	CupCam->releaseCamera();
}

void Main::initCamera(int* surface){
	CupCam->setSurface(surface);
	CupCam->initCamera();
}

void Main::startPreview(){
	CupCam->startPreview();
}

void Main::stopPreview(){
	CupCam->stopPreview();
}

Starting and stopping the preview is best left to the Surface callbacks, as some events can be controlled only at the Java level. A simple implementation of CameraPreview.java could be the following.

CameraPreview.java


class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
	private final static String TAG = "CameraPreview";

	protected static SurfaceHolder mHolder;

	CameraPreview(Context context) {
		super(context);
		init();
	}

	public CameraPreview(Context context, AttributeSet attrs) {
		super(context, attrs);
		init();
	}

	private void init() {
		setFocusable(true);
		mHolder = getHolder();
		mHolder.addCallback(this);
		mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
	}

	public void surfaceCreated(SurfaceHolder holder) {
		Native.initCamera(mHolder.getSurface());
		Native.startPreview();
	}

	public void surfaceDestroyed(SurfaceHolder holder) {
			Native.stopPreview();
			Native.releaseCamera();
	}

	public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
		//empty
	}
}

All that's left is to compile the native code into a dynamic library. For this we need an Android.mk makefile.

Android.mk


LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

LOCAL_ARM_MODE := arm

LOCAL_SRC_FILES := \
native.cpp \
Main.cpp \
CupCam.cpp

LOCAL_MODULE := libNative

LOCAL_C_INCLUDES := \
$(LOCAL_PATH) \
$(LOCAL_PATH)/includes \
$(ANDR_ROOT)/frameworks/base/camera/libcameraservice \
$(ANDR_ROOT)/frameworks/base/include/media \
$(ANDR_ROOT)/frameworks/base/include/binder \
$(ANDR_ROOT)/frameworks/base/include/utils

LOCAL_CFLAGS += -w -Wno-trigraphs -Wreturn-type -Wunused-variable -std=gnu99 -fPIC -O3 -fno-strict-aliasing -Wno-write-strings 

LOCAL_MODULE_SUFFIX := $(HOST_JNILIB_SUFFIX)

LOCAL_SHARED_LIBRARIES := \
	libandroid_runtime \
	libnativehelper \
	libcutils \
	libutils \
	libcameraservice \
	libmedia \
	libdl \
	libui \
	liblog \
    libicuuc \
    libicui18n \
    libsqlite
	
LOCAL_C_INCLUDES += $(JNI_H_INCLUDE)
LOCAL_LDLIBS := -lpthread -ldl

LOCAL_PRELINK_MODULE := false

include $(BUILD_SHARED_LIBRARY)

Compiling code in the Android source is as easy as using make.


$ cd cupcake_source/external/mycam/
$ source ../../build/envsetup.sh
$ mm

The compiled library can be found at cupcake_source/system/lib/libNative.so. You have to copy this .so to your Android project's /libs/armeabi/ folder. If those folders don't exist, create them, just as you would when using the Android NDK.

The code is not ready for release. The biggest problem is that it only works on the 1.5 OS. I'll make it much smarter in the following blog posts.