Entity Systems in cocos2d-x – Part 1

I fell in love with Entity Systems the first time I heard about them. It’s a data-driven software architecture pattern that is very well suited to game development. I used it in a game written in Java last year, and now I’ve applied it to a cocos2d-x game written in C++. I’ll share my experience with both in this blog post.

Entity Component Systems


There are three actors in this design pattern. Entities represent game objects. Your hero character is an entity, but a piece of a wall in the game is also an entity. If you wrote your game in an MVC style, your Entity class would probably be an abstract class with a rich set of virtual functions. But in the ECS pattern the Entity is usually just an id.
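In its simplest form an entity really can be nothing more than an integer handle, with a manager handing out fresh ids. This is a minimal sketch of the idea, not code from any particular framework:

```cpp
#include <cstdint>

// An entity is nothing but an id; all of its data lives in components.
using Entity = std::uint32_t;

class EntityManager {
public:
	// Hand out a fresh, never-used id for each new game object.
	Entity createEntity() { return _nextId++; }
private:
	Entity _nextId{0};
};
```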


The Components are the minions that differentiate the Entities. They make it possible to represent abilities like moving, attacking, being attacked, reacting to user input, etc. They are the main building blocks.

For example, you can have a PositionComponent class that stores the position of its entity in the game world, and an AIComponent class that stores the Entity’s “AI state”, etc.
You can think of the components as pure data classes (or even structs). They don’t know about other components; they are independent. They don’t contain any logic code either. The only methods they have are transformations on their own data. For example, you can have a HealthComponent class with a method like “bool damage(unsigned amount, DamageType type);“.
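Such a component could look roughly like this (the DamageType values and the flat resistance field are made up for illustration):

```cpp
#include <algorithm>

enum class DamageType { Physical, Magical };

// A pure data component: no game logic, only transformations on its own data.
struct HealthComponent {
	unsigned hp{100};
	unsigned physicalResist{0}; // flat physical damage reduction, for illustration

	// Returns true if the entity is still alive after taking the hit.
	bool damage(unsigned amount, DamageType type) {
		if (type == DamageType::Physical)
			amount -= std::min(amount, physicalResist);
		hp -= std::min(hp, amount);
		return hp > 0;
	}
};
```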


The glue that ties the components together is the Systems. They represent the game logic. Each works on a set of components, e.g. the AISystem processes the AIComponents (and maybe more).
Contrary to the component classes, they can communicate with each other, although I prefer it if they only do that through the components.
Imagine a scenario where the AISystem processes the AIComponents one after another and decides that a game object should switch from an idle state to a moving state. At this step it would set the target tile in the entity’s PathComponent and would also set the needsProcessing flag in it, to alert the PathFindingSystem. It would also update the entity’s RenderingComponent, so the RenderingSystem can do its own work to make the change visible, and so on.
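That hand-off might be sketched like this; all the class and member names below are illustrative, not from the actual game:

```cpp
struct PathComponent {
	int targetX{0}, targetY{0};
	bool needsProcessing{false}; // set by the AISystem, consumed by the PathFindingSystem
};

struct RenderingComponent {
	enum class Animation { Idle, Moving } animation{Animation::Idle};
};

// The AISystem never calls into the other systems directly; it only
// writes to components that those systems will read on their next pass.
void switchToMoving(PathComponent& path, RenderingComponent& rendering,
                    int tileX, int tileY) {
	path.targetX = tileX;
	path.targetY = tileY;
	path.needsProcessing = true; // alerts the PathFindingSystem
	rendering.animation = RenderingComponent::Animation::Moving;
}
```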

The reason I like this approach so much is that the ECS pattern is one of those rare cases where not only will your code be better organised, more modular and easier to maintain, but it will even run faster.

On the downside, I think it takes much more time to come up with the first running version of your game, as you have to implement several systems and components before your game reaches the first playable state. But I also think that this will hugely pay off in the long run.

It also requires a different way of thinking, which takes some time to get used to.
For example, when you write the attack code you can’t just write the attack code for your mage character. You have to think about every type of object that can attack. So you probably start by thinking about what attacking really means.
You can break it up, for instance by having an AttackComponent with data members like attackRange and damageType, which your AI code will use, together with the positions, as input. (You will add more of these as needed.) It can decide that the attacker is too far away and push a move state onto the AI stack on top of the attack state, so the entity can resume attacking once it gets near its target. The RenderingComponent can have a different animation for attacking and moving, but it actually won’t care which animation it needs to set, only whether it needs to change the current one. Other parts of your code can trigger particle effects, but again, they won’t care why the particles need to be emitted.
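The “push a move state on top of the attack state” idea could be sketched like this (a toy model with made-up names, not the game’s actual AI code):

```cpp
#include <cmath>
#include <stack>

enum class DamageType { Physical, Magical };
enum class AIState { Idle, Move, Attack };

struct AttackComponent {
	float attackRange{1.5f};
	DamageType damageType{DamageType::Physical};
};

struct PositionComponent { float x{0}, y{0}; };

// If the target is out of range, push a Move state on top of the Attack
// state, so the entity resumes attacking once it gets close enough.
void decideAttack(std::stack<AIState>& aiStack, const AttackComponent& attack,
                  const PositionComponent& self, const PositionComponent& target) {
	aiStack.push(AIState::Attack);
	const float dx = target.x - self.x;
	const float dy = target.y - self.y;
	if (std::sqrt(dx * dx + dy * dy) > attack.attackRange)
		aiStack.push(AIState::Move);
}
```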


The components are usually totally independent. You can add them to and remove them from any entity at runtime. This results in a super flexible design. It’s easy to test different functionalities, because they are separated by design. As a quick example, you can draw a bounding box around a game character simply by adding a DebugComponent to it at runtime.
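A toy store that allows attaching and detaching components at runtime could look like this. (Real frameworks like Artemis use packed per-type arrays instead of a map, as described in the next section; this is only to show the add/remove flexibility.)

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <typeindex>
#include <unordered_map>
#include <utility>

struct Component { virtual ~Component() = default; };
struct DebugComponent : Component { bool drawBoundingBox{true}; };

// Toy storage keyed by (entity id, component type): components can be
// attached to and detached from any entity at runtime.
class ComponentStore {
public:
	template <typename C>
	void add(unsigned entity, C component) {
		_store[{entity, std::type_index(typeid(C))}] =
			std::make_shared<C>(std::move(component));
	}
	template <typename C>
	bool has(unsigned entity) const {
		return _store.count({entity, std::type_index(typeid(C))}) > 0;
	}
	template <typename C>
	void remove(unsigned entity) {
		_store.erase({entity, std::type_index(typeid(C))});
	}
private:
	using Key = std::pair<unsigned, std::type_index>;
	struct KeyHash {
		std::size_t operator()(const Key& k) const {
			return std::hash<unsigned>()(k.first) ^ k.second.hash_code();
		}
	};
	std::unordered_map<Key, std::shared_ptr<Component>, KeyHash> _store;
};
```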

CPU friendliness

At each game loop, the engine goes through the systems once, one after another. A system processes its components in the same manner. The components are not stored in the entities; components of the same type are held together in an array. So as a system starts processing them, they are loaded into the CPU cache in the most efficient way possible, which can greatly speed things up.
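The layout difference can be illustrated with a trivial sketch: instead of chasing pointers from entity object to entity object, a system streams through a contiguous array of components (a simplified model of what Artemis-style frameworks do, with made-up component names):

```cpp
#include <cstddef>
#include <vector>

struct PositionComponent { float x{0}, y{0}; };
struct VelocityComponent { float dx{0}, dy{0}; };

// All components of one type live in a contiguous array, so a system
// iterating over them walks through memory cache line by cache line.
struct MovementSystem {
	void process(std::vector<PositionComponent>& positions,
	             const std::vector<VelocityComponent>& velocities,
	             float dt) {
		for (std::size_t i = 0; i < positions.size(); ++i) {
			positions[i].x += velocities[i].dx * dt;
			positions[i].y += velocities[i].dy * dt;
		}
	}
};
```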

The game

The game I worked on is an isometric cocos2d-x game. It was originally a test given to me by a game dev company as part of their interview process, and it was written in an MVC style. So the work I describe here started as a refactoring.

Choosing an Entity System

I had used an Entity System framework before in a Java project. (It’s called Artemis. On how to integrate it with LibGDX for Android development, you can check out one of my code samples here.)

I decided to use Sidar Talei’s C++ port of the Artemis Entity Framework, but my only reason for this was that its interface was familiar to me. Both the EntityX and anax frameworks are worth a look; maybe I’ll make a comparison between them in a later blog post. I cloned the Bitbucket repo and added some important bug fixes to it. The final artemis-cpp framework I used can be downloaded from here.


I have to make a confession first. I think cocos2d-x in its current state is not a great game engine. In fact, it could serve as an example of how not to write C++ code. While I love both C++11 and Objective-C, I also think that forcing iOS patterns onto C++ leads nowhere good. Seeing the choices they made in the engine (like its asset management) through an Android engineer’s eyes is even more excruciating. :| So I naturally started by adding the missing std::nothrow parameters to the new operators in the create() methods, just to soothe myself. ;)

Input handling

I found input handling in cocos2d-x very weird. One would assume that as there is a scene graph in place, the input handling would be tied closely to it. Well, it’s not. The input handling is totally independent of the CCNode graph. This was very counterintuitive for me and I spent a great amount of time figuring out how to do it right in the game.

As a side note, in LibGDX you don’t have a built-in scene graph, but you do have InputMultiplexers and built-in GestureDetectors. Using these, it was easy to set up separate GestureListenerSystems for the HUD, the map and the units. You can add these systems to the world like this; I find this code more beautiful because of its symmetry.

InputMultiplexer multiplexer = new InputMultiplexer();
multiplexer.addProcessor(new GestureDetector(hudInputSystem));
multiplexer.addProcessor(new GestureDetector(objectInputSystem));
multiplexer.addProcessor(new GestureDetector(mapInputSystem));


On the downside, every call goes through the HUD, as it is the topmost input receiver. It checks whether a touch is relevant to it and either handles and swallows it, or passes it on to the next GestureListenerSystem. Sample code for this can be found here.

But let’s get back to cocos2d-x. Based on the cocos2d documentation, you have two choices for input handling: standard or targeted touch delegates. You can’t use both on the same CCNode, but you can mix them in separate parts of the app. As the map needed multi-touch support for zooming, but I also wanted the HUD to swallow the relevant touches, I ended up using both of them.

You have to remember that for this to work properly, you also have to set the CCTouchDelegate‘s priority, or you could find yourself in a situation where pressing a button on the HUD triggers an unseen object below it. By the way, this priority is an int, and apart from the fact that to get higher priority you have to go lower :), these values are even global! I think this is a great weakness in the cocos2d API. They could instead take full advantage of the node graph here and propagate the touches up the chain, starting from the leaf nodes. That would be more intuitive and, as a side effect, it would also get rid of the globalness of the priorities.
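The propagation scheme I have in mind could be sketched as a toy model (this is not cocos2d-x code; the Node struct and callback are made up): offer the touch to the leaves first, then bubble it up through the parents until somebody swallows it, with no global priority numbers involved.

```cpp
#include <functional>
#include <vector>

// Toy scene-graph node: a touch is offered to the descendants first
// (the leaves), then bubbles up until some node handles it.
struct Node {
	std::vector<Node*> children;
	std::function<bool(float, float)> onTouch; // returns true to swallow

	bool dispatchTouch(float x, float y) {
		for (Node* child : children)
			if (child->dispatchTouch(x, y))
				return true;             // a descendant swallowed it
		return onTouch && onTouch(x, y); // bubble: offer it to this node
	}
};
```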


My goal was to create a TouchInputComponent that represents that an object is touchable. It would also encapsulate the input handling code. After a bit of work I was able to achieve this, well… almost.


class TouchInputComponent : public artemis::Component, public cocos2d::CCTouchDelegate, public cocos2d::CCObject {
public:
	TouchInputComponent(artemis::Entity& entity, cocos2d::CCNode* node);

	static MapTouchListener* touchListener;
	cocos2d::CCNode* node;
	artemis::Entity& entity;

	bool ccTouchBegan(cocos2d::CCTouch *touch, cocos2d::CCEvent *event) override;
	void ccTouchMoved(cocos2d::CCTouch *touch, cocos2d::CCEvent *event) override;
	void ccTouchEnded(cocos2d::CCTouch *touch, cocos2d::CCEvent *event) override;
	void start();
	void stop();

private:
	bool _touched{false};
	bool _delegateAdded{false};
	std::chrono::high_resolution_clock::time_point _tp;
};

I have a MapTouchListener class that receives the processed touches. It can get onEntityTouched and onEntityLongTouched events; in other parts of the code it can get the same events for map touches.
The component stores the node that visually represents its entity. (This is the root node of the ViewComponent.) It also stores the Entity it belongs to. (By default the components in this framework don’t know about their entities, so I store the reference myself.)


TouchInputComponent::TouchInputComponent(artemis::Entity& entity, cocos2d::CCNode* node) : entity(entity), node(node) {}

bool TouchInputComponent::ccTouchBegan(cocos2d::CCTouch *touch, cocos2d::CCEvent *event) {
	CCPoint p = node->convertTouchToNodeSpace(touch);
	CCRect rect(0.0f, 0.0f, node->getContentSize().width, node->getContentSize().height);
	_touched = rect.containsPoint(p);
	if (_touched) _tp = std::chrono::high_resolution_clock::now();
	return _touched;
}

void TouchInputComponent::ccTouchMoved(cocos2d::CCTouch *touch, cocos2d::CCEvent *event) {
	if (_touched && !touch->getDelta().equals(CCPointZero)) {
		_touched = false;
	}
}

void TouchInputComponent::ccTouchEnded(cocos2d::CCTouch *touch, cocos2d::CCEvent *event) {
	if (_touched) {
		if (std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::high_resolution_clock::now() - _tp).count() >= 500) {
			TouchInputComponent::touchListener->onEntityLongTouched(entity, node->getZOrder());
		} else {
			TouchInputComponent::touchListener->onEntityTouched(entity, node->getZOrder());
		}
		_touched = false;
	}
}

void TouchInputComponent::start() {
	if (_delegateAdded) return;
	CCDirector::sharedDirector()->getTouchDispatcher()->addTargetedDelegate(this, static_cast<int>(TouchPriority::Entity), false);
	_delegateAdded = true;
}

void TouchInputComponent::stop() {
	if (_delegateAdded) {
		CCDirector::sharedDirector()->getTouchDispatcher()->removeDelegate(this);
		_delegateAdded = false;
	}
}

And here you can see why I failed at encapsulating the input handling in the component. Sadly, swallowing the touch events was not an option for me. The problem is that you have to decide whether you swallow a touch or not in ccTouchBegan(), and you can't change your mind later, which you would want to do a lot whenever a gesture gets invalidated. But not swallowing a relevant touch means you need some global code somewhere where every kind of input is propagated and checked for validity. Which means a few "bool ignoreNextTouch" kinds of flags.

In my case the global input handler class was the InputSystem. This system doesn't process any components directly. (Which is totally valid.) It gets its input asynchronously and stores only the last touch. This is not a real restriction, as it processes the input in every game loop if it needs to.
Using the z order, it figures out which entity was most likely intended to be touched and propagates this event to the other systems through their components.
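The z order bookkeeping boils down to a simple "keep the topmost candidate" rule, roughly like this (a simplified sketch, only loosely based on the real InputSystem):

```cpp
// Touch callbacks may arrive for several overlapping entities; keep only
// the one with the highest z order, i.e. the one drawn on top.
struct TopmostPicker {
	int bestEntity{-1};
	int bestZOrder{-100000}; // lower than any real z order

	void offer(int entity, int zOrder) {
		if (zOrder > bestZOrder) {
			bestZOrder = zOrder;
			bestEntity = entity;
		}
	}
};
```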


/**
 * InputSystem
 * The system is responsible for handling input (touches) from the player. It does NOT handle the HUD buttons and the pan and zoom gestures on the map.
 * The System is an "empty" System in the sense that it does not operate directly on components.
 * It handles only one touch event in a loop and discards the others. This doesn't seem to be a big limitation.
 */
class InputSystem : public artemis::EntitySystem, public MapTouchListener {
public:
	void onEntityTouched(artemis::Entity&, int zOrder) override;
	void onEntityLongTouched(artemis::Entity&, int zOrder) override;
	void onMapTouchedAtTile(const MapTile&) override;
	void onMapLongTouchedAtTile(const MapTile&) override;

protected:
	virtual void begin() override;
	virtual void processEntities(artemis::ImmutableBag<artemis::Entity*>&) override;
	virtual void end() override;
	virtual bool checkProcessing() override;

private:
	artemis::Entity* _touchedEntity{nullptr};
	MapTile _touchedTile;
	std::set<artemis::Entity*> _selectedEntities;
	int _lastZOrder{-10000};
	bool _entityTouched{false};
	bool _mapTouched{false};
	bool _longTouched{false};
	bool _checkProcessing{false};
	bool _ignoreNextMapTouch{false};

	inline void clearFlags();
	void collectSelectionsAroundTile(const MapTile&);
	void selectEntity(artemis::Entity*);
	void unSelectEntity(artemis::Entity*);
	std::set<artemis::Entity*>& getSelectedEntities();
	void attackEntity(artemis::Entity*);
	void moveUnitsTo(const MapTile&, const std::set<artemis::Entity*>&);
};

void InputSystem::clearFlags() {
	_checkProcessing = false;
	_entityTouched = false;
	_longTouched = false;
	_mapTouched = false;
}

The interesting parts of InputSystem.cpp follow.


void InputSystem::begin() { /* ... */ }

void InputSystem::processEntities(ImmutableBag<Entity*>& bag) {
	using std::string;

	if (_longTouched) {
		if (_entityTouched) {
			//_touchedTile = ...;
		}
		// ...
	}
	if (_entityTouched) {
		GroupManager* gMan = world->getGroupManager();
		if (gMan->isInGroup(string(EntityGroup::ENEMY), *_touchedEntity)) {
			// ...
		} else if (gMan->isInGroup(string(EntityGroup::ALLY), *_touchedEntity)) {
			// ...
		} //no else branch!
	}
	if (_mapTouched) {
		moveUnitsTo(_touchedTile, _selectedEntities);
	}
}

void InputSystem::end() {
	_lastZOrder = -100000;
}


It is possible to separate the input handling from the rest of the code. One important thing I forgot to mention earlier is that I was also able to eliminate the need to inherit from CCNode (or its cousins), which was not apparent at the beginning. The most important step in getting rid of the view classes was to separate the input handling from them.
I couldn't swallow the touches because it interfered with gesture detection. The only place where it worked was the HUD buttons, which is a victory in itself. :) Just make sure that you set the touch priority on the HUD very low (to make it very high). :)

I'll introduce a greater part of the code in the following blog posts.