Awesome Game Talks

A list of awesome video lectures on game development, plus tips on landing a job in the gaming industry.

Chet Faliszek of Valve has a single mantra for getting a job as a developer at a gaming company: “make something”.


PouchLAN - Experiment

Working on a new open source plugin for PouchDB.

A decentralized master-slave PouchDB that also live-syncs with a remote DB at the same time?

The idea is to create a hub for PouchDB nodes over a Local Area Network (LAN), with the master replicating data to the remote server.

This saves a lot of HTTP connections to the remote server, and only the master has to be connected to the Internet. The remaining nodes just work against the master's local DB.

I am using node-discover, pouchdb and express-pouchdb.
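
A minimal sketch of the idea (not the actual plugin); the database name, port and remote URL below are assumptions for illustration:

var Discover = require('node-discover');
var PouchDB = require('pouchdb');
var express = require('express');
var expressPouchDB = require('express-pouchdb');

var REMOTE_DB = 'https://example.com/appdata'; // assumed remote endpoint
var localDb = new PouchDB('appdata');          // assumed local DB name

// node-discover elects a single master on the LAN automatically
var d = Discover();

d.on('promotion', function () {
  // This node became the master: expose its local DB to the LAN over HTTP...
  var app = express();
  app.use('/', expressPouchDB(PouchDB));
  app.listen(5984);

  // ...and keep a live sync going with the remote server
  localDb.sync(REMOTE_DB, { live: true, retry: true });
});

d.on('added', function (node) {
  // A non-master node would sync against the master's LAN endpoint instead,
  // e.g. localDb.sync('http://' + node.address + ':5984/appdata', { live: true });
});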


grunt-icons8

People using Icons8 fonts in an Angular app, or any other Grunt-compatible application, can use

https://github.com/vasumahesh1/grunt-icons8


It’s a really simple tool that manages the post-processing of Icons8 font files. It performs basic tasks like renaming the files and adding a prefix to the SCSS/CSS files. It also calculates the relative path between your cssExport directory and the fontExport directory and adds it to the style files automatically.

grunt.initConfig({
  icons8: {
    dev: {
      options: {
        prefix: 'my-app',
        cssExportPath: 'css/',
        fontExportPath: 'output/',
        fontFilename: 'myFont',
        scss: true
      },
      archivePath: 'test/app.zip'
    },
    prod: {
      options: {
        cssExportPath: 'css/',
        fontExportPath: 'output/',
        fontFilename: 'myFont',
        relativeFontPath: '../assets/fonts/typography'
      },
      archivePath: 'test/app.zip'
    }
  },
});
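
Assuming the standard Grunt multi-task conventions, you would then load the plugin and run a target (dev and prod above are just example target names):

// Load the plugin that provides the icons8 task
grunt.loadNpmTasks('grunt-icons8');

After that, grunt icons8:dev or grunt icons8:prod on the command line runs the corresponding target.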

Importing ScaledJS to Cocos2D JS

I just started making my game using Cocos2D JS. It’s really easy to import ScaledJS into Cocos2D JS.

Install

Install ScaledJS, typically into the place where you keep your other libraries. (Configure your .bowerrc before executing this command.)

bower install scaledjs --save
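
For reference, a .bowerrc along these lines tells bower where to install; the directory value here is just an assumption matching the path used in the next step:

{
  "directory": "src/lib"
}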

Include file in project.json

Include this line in your jsList in project.json

"jsList" : [
...,
...,
"src/lib/scaledjs/build/scaled.min.js",
...
]

Check ScaledJS Import

Check using CCLog

cc.log(ScaledGen);

Well, that’s it! You can now start making terrain that suits your needs.


AxiCLI just got smarter

The new AxiCLI update includes numerous backend changes plus more SSH controls.

Override User for SSH

Useful when you have lots of users to switch between; for example, when dealing with Hadoop clusters you might have hive, hdfs, spark and <main_user_account>.

ssh-<server_name> --user hduser

Override Options for SSH

Useful when you need to provide identity keys or tunnel properties.

ssh-<server_name> --options "-ND 8157"
ssh-<server_name> --options "-i keyfile.pem"

AWS Redshift Slow - For Real Time Inserts

While evaluating various data warehouses, I found that this analytical warehouse performs really poorly when it comes to actually inserting values through a simple INSERT INTO.

As per Amazon, the fastest way to insert into Redshift is to dump the data into Amazon S3 and then use the COPY command to transfer it. That approach is very fast (almost sub-second).
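
For reference, a typical COPY from S3 looks something like the following; the table, bucket path and credentials are placeholders:

COPY my_table
FROM 's3://my-bucket/exports/data.json'
CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
FORMAT AS JSON 'auto';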

For regular inserts, Redshift seems to throttle my concurrent INSERTs, even against a staging copy of the live table. I had set up a concurrency of 5 inserts per second and got large delays in my worker queue.

Below are the logs from my worker queue, benchmarking about 5 concurrent workers each doing a single insert into Redshift:

09:53:17.045Z  INFO Titan-Runner: Push Completed in - 1.553s
09:53:17.045Z  INFO Titan-Runner: Push Completed for Document
09:53:17.457Z  INFO Titan-Runner: Push Started for Document
09:53:18.479Z  INFO Titan-Runner: Push Completed in - 2.91s
09:53:18.488Z  INFO Titan-Runner: Push Completed for Document
09:53:18.757Z  INFO Titan-Runner: Push Started for Document
09:53:19.786Z  INFO Titan-Runner: Push Completed in - 4.152s
09:53:19.787Z  INFO Titan-Runner: Push Completed for Document
09:53:19.896Z  INFO Titan-Runner: Push Started for Document
09:53:21.124Z  INFO Titan-Runner: Push Completed in - 5.693s
09:53:21.124Z  INFO Titan-Runner: Push Completed for Document
09:53:21.208Z  INFO Titan-Runner: Push Started for Document
09:53:22.424Z  INFO Titan-Runner: Push Completed in - 6.525s
09:53:22.424Z  INFO Titan-Runner: Push Completed for Document
09:53:22.476Z  INFO Titan-Runner: Push Started for Document
09:53:23.751Z  INFO Titan-Runner: Push Completed in - 7.427s
09:53:23.751Z  INFO Titan-Runner: Push Completed for Document
09:53:23.852Z  INFO Titan-Runner: Push Started for Document
09:53:25.064Z  INFO Titan-Runner: Push Completed in - 9.264s
09:53:25.064Z  INFO Titan-Runner: Push Completed for Document
09:53:25.170Z  INFO Titan-Runner: Push Started for Document
09:53:26.417Z  INFO Titan-Runner: Push Completed in - 10.039s
09:53:26.418Z  INFO Titan-Runner: Push Completed for Document
09:53:26.468Z  INFO Titan-Runner: Push Started for Document
09:53:27.757Z  INFO Titan-Runner: Push Completed in - 11.322s
09:53:27.757Z  INFO Titan-Runner: Push Completed for Document
09:53:28.039Z  INFO Titan-Runner: Push Started for Document
09:53:29.071Z  INFO Titan-Runner: Push Completed in - 12.692s
09:53:29.071Z  INFO Titan-Runner: Push Completed for Document
09:53:29.219Z  INFO Titan-Runner: Push Started for Document
09:53:30.353Z  INFO Titan-Runner: Push Completed in - 12.896s
09:53:30.353Z  INFO Titan-Runner: Push Completed for Document
09:53:30.407Z  INFO Titan-Runner: Push Started for Document
09:53:31.674Z  INFO Titan-Runner: Push Completed in - 12.917s
09:53:31.675Z  INFO Titan-Runner: Push Completed for Document
09:53:31.803Z  INFO Titan-Runner: Push Started for Document
09:53:33.013Z  INFO Titan-Runner: Push Completed in - 13.117s
09:53:33.014Z  INFO Titan-Runner: Push Completed for Document
09:53:33.067Z  INFO Titan-Runner: Push Started for Document
09:53:34.396Z  INFO Titan-Runner: Push Completed in - 13.188s
09:53:34.397Z  INFO Titan-Runner: Push Completed for Document
09:53:34.499Z  INFO Titan-Runner: Push Started for Document
09:53:35.747Z  INFO Titan-Runner: Push Completed in - 13.271s
09:53:35.747Z  INFO Titan-Runner: Push Completed for Document
09:53:36.083Z  INFO Titan-Runner: Push Started for Document
09:53:37.065Z  INFO Titan-Runner: Push Completed in - 13.213s
09:53:37.065Z  INFO Titan-Runner: Push Completed for Document
09:53:37.520Z  INFO Titan-Runner: Push Started for Document
09:53:38.425Z  INFO Titan-Runner: Push Completed in - 13.255s
09:53:38.425Z  INFO Titan-Runner: Push Completed for Document

The completion time climbs steadily with each insert (plateauing around 13 seconds here), which makes it impossible to use Redshift for real-time inserts.


Using Avro for Node - Hadoop/Hive

Transferring real-time data from our servers to Hive became a huge issue. Our data was JSON, and using JSON serializers & deserializers was a computationally intensive task. We also had trouble finding a straight JSON -> ORC conversion, primarily because the JSON was schemaless while ORC, on the other hand, is tied to the schema of a Hive table.

Then came node-avro-io. Using it, we can impose a schema on our JSON, and the library generates an Avro file which can be loaded into a Hive schema. Sounds great? The reason it’s so complicated is that:

  • HIVE doesn’t support inserting complex structures like arrays and maps using direct HQL (Hive Query Language)
  • The JSON SerDe (Serializer/Deserializer) is computationally expensive, as it has to deserialize each row before running a Hive job/query.

This is where Avro plays a huge role: it lets me provide a schema for my JS object and impose it. Avro also supports structures, named structures, maps as well as arrays!


Let’s create a simple table simple_users(name, marks, address) and define a common schema shared between NodeJS (JS objects) and Hive tables stored in HDFS. Given below is the schema of our table:

{
  "type": "record",
  "name": "userRecord",
  "namespace": "com.user.record",
  "fields": [{
    "name": "name",
    "type": "string"
  }, {
    "name": "marks",
    "type": {
      "type": "array",
      "items": "int"
    }
  }, {
    "name": "address",
    "type": {
      "type": "record",
      "name": "address_struct",
      "fields": [{
        "name": "zipcode",
        "type": "int"
      }, {
        "name": "city",
        "type": "string"
      }]
    }
  }]
}

Now, we will use the exact same schema in our Hive Table create query.

CREATE TABLE testdw.simple_users
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES (
'avro.schema.literal'='{
  "type": "record",
  "name": "userRecord",
  "namespace": "com.user.record",
  "fields": [{
    "name": "name",
    "type": "string"
  }, {
    "name": "marks",
    "type": {
      "type": "array",
      "items": "int"
    }
  }, {
    "name": "address",
    "type": {
      "type": "record",
      "name": "address_struct",
      "fields": [{
        "name": "zipcode",
        "type": "int"
      }, {
        "name": "city",
        "type": "string"
      }]
    }
  }]
}');

Now, once your complex Avro-schema-based table is ready, we can use the same schema JSON in our NodeJS code and impose this common schema:

var avro = require('node-avro-io').DataFile.AvroFile();
var schema = {
        "name": "data",
        "type": "record",
        "fields": [{
                "name": "name",
                "type": "string"
        }, {
                "name": "marks",
                "type": {
                        "type": "array",
                        "items": "int"
                }
        }, {
                "name": "address",
                "type": {
                        "type": "record",
                        "name": "address_struct",
                        "fields": [{
                                "name": "zipcode",
                                "type": "int"
                        }, {
                                "name": "city",
                                "type": "string"
                        }]
                }
        }]
};
// Open an Avro container file for writing (deflate-compressed)
var writer = avro.open("test-output.avro", schema, { flags: 'w', codec: 'deflate' });
// Write one record matching the schema and close the file
writer.end({ name: "Vasu", marks: [1,2,3,4], address: {city: "Delhi", zipcode: 112233} });

Now we can transfer test-output.avro to HDFS (using node-webhdfs). Then use the HQL below to LOAD the Avro file into the data warehouse:

LOAD DATA LOCAL INPATH '/home/hduser/test-output.avro'
OVERWRITE INTO TABLE testdw.simple_users;
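
For the HDFS upload step, here is a minimal sketch; I'm assuming the streaming API of the webhdfs npm package (the node-webhdfs module mentioned above may differ), and the host, user and paths are placeholders:

var fs = require('fs');
var WebHDFS = require('webhdfs');

// Placeholder connection details for the HDFS namenode
var hdfs = WebHDFS.createClient({ user: 'hduser', host: 'namenode-host', port: 50070 });

// Stream the local Avro file into HDFS
var localStream = fs.createReadStream('test-output.avro');
var remoteStream = hdfs.createWriteStream('/user/hduser/test-output.avro');
localStream.pipe(remoteStream);

remoteStream.on('error', function (err) { console.error(err); });
remoteStream.on('finish', function () { console.log('Upload complete'); });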

After doing tons of research, this may be an optimal solution for getting complex data structures into your data warehouse. To summarize, the entire process between Node and Hadoop is:

  1. Generate your JS object
  2. Impose your schema & generate the Avro file
  3. Upload the Avro file to HDFS
  4. Make a JDBC connection to Hive
  5. Create a temporary table and load the file
  6. Transfer the contents of the temporary table into a more optimized ORCFILE-based Hive table (see the HQL sketch after this list)
  7. Close the JDBC connection
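
For step 6, the ORC handoff would look roughly like the HQL below (simple_users_orc is a hypothetical table name; the column types mirror the Avro schema above):

CREATE TABLE testdw.simple_users_orc (
  name string,
  marks array<int>,
  address struct<zipcode:int,city:string>
)
STORED AS ORC;

INSERT OVERWRITE TABLE testdw.simple_users_orc
SELECT * FROM testdw.simple_users;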

Unfortunately, this workflow is necessary as long as Hive’s INSERT INTO doesn’t support complex structures.


Easy Copy To & From Servers

You can copy files to and from servers directly, as long as you remember the server name!

copy-from-<server_name> <server_file_path> <local_path>
copy-to-<server_name> <local_file_path> <server_path>

Paths can be relative to your SSH username's home directory (/home/<ssh_username>/) or point directly at root-level directories!

Copying Files From Servers

copy-from-dev myproject/app.js /p/
copy-from-dev /var/log/couchdb/couchdb.log /p/

Copying Files To Servers

copy-to-dev /p/couchdb.log myproject/logs/
copy-to-dev /p/couchdb.log /var/etc/couchdb/couchdb_log_modified.log

Copying Files As Root

copy-from-root-<server_name> <server_file_path> <local_path>
copy-to-root-<server_name> <local_file_path> <server_path>

Paths can be relative to your SSH username's home directory or point directly at root-level directories!


Register SSH Keys in your Servers

You can register your SSH keys with the new AxiCLI 0.1.3 update using:

axicli register <server_name>


Minecraft - Advanced

Complex Pipe Systems