Mind map for TOGAF v9.1

A few months back, I appeared for the TOGAF v9 exam and successfully cleared it. Although the exam is tough in itself, my experience with earlier projects helped me understand its concepts. For those who have not been as fortunate, I would suggest looking for a mentor and taking their help; it really does help. Many of us align TOGAF with the Solution Architect role, or even try to align it with technical experience. To some extent that is true, but as a whole, these are just pieces of a bigger puzzle.
In short, TOGAF is about applying its ADM (Architecture Development Method) to a given state of requirements and leading it to the next stage, while at the same time building the enterprise's capability and maturity. I believe this can be applied to any business requirement or problem that involves progressing from one (current) state to another (more advanced, more mature) state.
During my exam study I took help from different sources, including the self-study kit from The Open Group. Some notes I created on my own during the course. The mind map is one of the notes I keep referring to whenever I need a quick glance or a reminder.
It was created using FreeMind, and I would like to share it for aspiration, inspiration and reference study.
Hope it helps you.

Download from here: http://1drv.ms/1WvYO4N

Cloud for Research

Cloud has attracted a lot of attention, be it organisations migrating their legacy solutions to the cloud, building new solutions to serve different devices and platforms, or researching new ideas. It is more than a buzzword. Cloud is now more than its basic avatar of computing, networking and storage. The demands from business have grown by leaps and bounds, and to keep pace, the cloud platform is evolving and stretching its arms to embrace more than the basic necessities.

Despite the cloud’s popularity, it has been unclear whether it can serve the research field. Research is always expensive. It requires quality expertise, deep experience and patience to examine different factors under a controlled environment. Objective-based research is heavily funded. Large historical datasets are often parsed for heavy processing, requiring enough computing power to let researchers manipulate the data and explore it in different combinations. Wrangling and dismantling data requires expensive infrastructure running continuously, which is often replaced with compromised or cheaper resources to save cost.

Cloud computing holds the promise of availability and scaling. Vendors like Microsoft Azure, by design, should help researchers analyse data and delegate mundane tasks to the cloud. Industry-wide, proprietary and scientifically proven intensive algorithms can be executed on map-reduce, and high-quality predictions can be achieved using machine learning.

Heavy lifting and shifting of data, and integration with different data sources, are tasks that are easily plugged into the cloud using either out-of-the-box or custom adapters. It is now necessary for researchers to start thinking of the cloud as a platform and exploit its benefits by leasing resources. This will help them focus more on their research rather than worrying about high-end computing systems and servers.

Microsoft, and other cloud providers, continue to invest heavily in security and become more mature with every cloud upgrade. Adapting to mandatory industry-level compliance will not only show they are concerned about protecting data and algorithms, but will also build trust.

#Arduino #IoT Project 5 | Detect motion using PIR sensor and blink LED

Project – Detect motion using PIR sensor and blink LED

Output – http://youtu.be/Qkiwpjyv0BU, http://youtu.be/CtG75k7dzt8


int ledPin = 13;     // LED output pin
int inputPin = 2;    // PIR sensor input pin
int pirState = LOW;  // assume no motion at start
int val = 0;         // variable for reading the pin status

void setup() {
  pinMode(ledPin, OUTPUT);    // declare LED as output
  pinMode(inputPin, INPUT);   // declare PIR sensor as input
  Serial.begin(9600);         // open the serial port for status messages
}

void loop() {
  val = digitalRead(inputPin);     // read input value
  if (val == HIGH) {               // check if the input is HIGH
    digitalWrite(ledPin, HIGH);    // turn LED ON
    if (pirState == LOW) {
      Serial.println("Motion detected!");
      // We only want to print on the output change, not state
      pirState = HIGH;
    }
  } else {
    digitalWrite(ledPin, LOW);     // turn LED OFF
    if (pirState == HIGH) {
      Serial.println("Motion ended!");
      pirState = LOW;
    }
  }
}

Data Scientist | Relational Algebraic significance in Data Science

We have been using relational databases for decades now. Normalisation of the database is key. It rests on three base components – structures, constraints and operations. Although this is only one format of source data a data scientist will encounter, it will also be necessary for a data scientist to store values in it, for example to store processed data for data visualisation or to perform further manipulations.

As a data scientist, it is necessary to understand that programs that manipulate tabular data exhibit an algebraic structure that allows reasoning and manipulation. This algebra of tables performs operations such as Select, Update, Insert and Delete, along with projection through columns and joins with other tables. It may also include aggregate, union, difference, cross product and many others.
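These operations can be sketched outside a database engine too. Below is a minimal Python illustration (the table, column names and values are invented for the example) of selection and projection on a toy relation held as a list of dicts:

```python
# Toy relation: each dict is a row, each key a column
employees = [
    {"name": "Asha", "dept": "IT", "salary": 70},
    {"name": "Ben",  "dept": "HR", "salary": 50},
    {"name": "Cara", "dept": "IT", "salary": 65},
]

# Selection: keep only rows matching a predicate (SQL: WHERE dept = 'IT')
it_rows = [r for r in employees if r["dept"] == "IT"]

# Projection: keep only certain columns (SQL: SELECT name)
names = [{"name": r["name"]} for r in it_rows]

print(names)  # [{'name': 'Asha'}, {'name': 'Cara'}]
```

Every operation takes a table and returns a table, which is exactly what lets the algebra compose.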

Hence, it is necessary to understand relational algebra alongside relational databases: it helps a data scientist optimise and validate their dealings with data while performing various data operations. As an example, applying the algebraic laws of arithmetic operations – division, multiplication, addition and subtraction – can reduce the overhead of executing a query and deliver a quality response time. Data scientists often neglect these simple laws, which may impact the result sets. Take the expression p = ((z*5) + (z*8) + a) / b, where the variables a, z and b are replaced with values during evaluation. If a is always 0 and b is always 1, additional algebraic operations are being evaluated for nothing. Remember, we are talking about datasets of many gigabytes, not a few megabytes.

As a data scientist, you should always think about the numbers rather than just tables and columns. In the earlier expression, evaluate what happens if z is 2: adding 0 to any number, or dividing any number by 1, makes no difference to the output. Hence, it is necessary to break the expression down, validate it and rewrite it into a form that performs fewer evaluations and cycles. This is symbolic reasoning, and it is necessary to understand that a computer simply follows the arithmetic instructions of a given expression; it will not suggest or perform any kind of symbolic reasoning on its own. This is cheap to ignore when the objects you manipulate are mere integers, but when you are dealing with terabyte-sized tables, skipping this kind of symbolic reasoning becomes very expensive.
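To make the point concrete, here is a small Python sketch (the function names are mine, purely for illustration) comparing the naive expression with its algebraically simplified form when a = 0 and b = 1:

```python
# Naive evaluation: every operation in p = ((z*5) + (z*8) + a) / b is executed
def p_naive(z, a, b):
    return ((z * 5) + (z * 8) + a) / b

# Simplified by symbolic reasoning: with a = 0 and b = 1, p reduces to z * 13
def p_simplified(z):
    return z * 13

print(p_naive(2, 0, 1))    # 26.0
print(p_simplified(2))     # 26
```

Both forms give the same answer, but the simplified one performs a single multiplication instead of two multiplications, two additions and a division; a saving that matters when the expression is evaluated across billions of rows.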

So, it is a question of cost-based optimisation, as the shortest version of an expression does not necessarily perform best all the time. Relational database engines parse a SQL query into relational algebra before execution and associate a query execution cost with it, depending on data volume and the predicates in the WHERE/GROUP BY/HAVING clauses. It is always worth evaluating the various candidate expressions on sample data and choosing the one with the lowest cost. Note that performing a query on a table (or tables) will always return a table; call it a result set, dataset or data view.
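The same cost question can be illustrated with a toy query plan in Python (tables and column names invented for the example): filtering before joining and joining before filtering return identical result sets, but they differ in how many intermediate rows are produced:

```python
# Two hypothetical relations as lists of dicts
orders = [{"id": i, "cust": i % 3, "amt": i * 10} for i in range(100)]
custs  = [{"cust": c, "region": "EU" if c == 0 else "US"} for c in range(3)]

# Plan A: join first, then filter (100 intermediate rows enter the filter)
joined = [dict(o, **c) for o in orders for c in custs if o["cust"] == c["cust"]]
plan_a = [r for r in joined if r["region"] == "EU"]

# Plan B: push the selection below the join (only EU customers enter the join)
eu = [c for c in custs if c["region"] == "EU"]
plan_b = [dict(o, **c) for o in orders for c in eu if o["cust"] == c["cust"]]

print(plan_a == plan_b)  # True: same result set, different cost
```

A cost-based optimiser makes exactly this kind of choice, picking the plan with the smallest estimated intermediate volume.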

Hence, you must also consider the logical shape of expressions, such as associativity and join direction (right, left or cross). Furthermore, these are often combined with logical operations like AND and OR, and negating conditions like NOT.

Distinguish | #datascientist vs #visualisation vs #businessintelligence #bi vs #machinelearning #ml vs #statistics vs #dba

Distinguishing skills vs. the data scientist gap
Business Intelligence – Business intelligence is a particular approach to a particular problem. A BI engineer is not expected to consume their own data products, perform the analysis and make the business decisions themselves; usually they build tools for others to make decisions with. A data scientist does both. Learn how to do statistical modelling and how to communicate results to business groups and decision makers.
Statistics – Statistics is at the heart of what a data scientist does day-to-day. Statisticians are comfortable assuming that any data they encounter is available in a single place, and they extract the most information possible from a sparse, expensive-to-collect, small data set. Data science is a newer discipline that works on massive datasets and performs analysis on disparate data rather than on a sample or an assumption; however, the models and methods remain the same. Learn to deal with data that does not come from a single source.
DBA – Database programmers bring a lot of skills and expertise. However, looking at the diverse data coming from different sources, such as graph nodes and vectors, there is less chance the incoming data is relational, so the relational database may not be the right tool, even if its concepts transcend any particular system. Data scientists work on disparate and diversified datasets, including non-relational data and formats other than text (audio, video, binary), helping them perform deep analysis and bring insights. Learn to deal with unstructured data.
Visualisation – Like statisticians, visualisation specialists are concerned with limited data from a single source rather than multiple sources. Data scientists conceptualise how the final data should be made interactive, helping business decision makers decide based on the available historical data. Learn about algorithms and trade-offs at scale.
Machine Learning – Machine learners help bring reality to the idea of performing predictions on data. However, it is more of an engineering process, where choosing a technique, applying it and running it is a fairly small fraction of the work. Data scientists prepare the models, methods and techniques that decide how machine learning will help in their data application. There is a lot of work involved before the machine learning stage: data manipulation, data wrangling, cleaning, massaging, data jujitsu, data munging. Learn to do statistical analysis and data wrangling from multiple sources before the machine learning process.

#gamification | Choosing the right cloud service offerings from #Azure for #gamearchitecture #gaming #architecture

Today's era is about the cloud and analysing information. Cloud is everywhere, be it business apps or mobile apps. So what is its role in the gaming world? What would be its significance? Front end? Back end? Hosting? Which services should be exploited, and for what?

Cloud offers a lot – virtual machines (VMs), storage, scaling, ready environments, industry-standard compatible server images, efficiency – all at reduced cost. One of the huge benefits is scaling. We all know that incoming requests increase by leaps and bounds during peak periods, as a game may become popular thanks to your marketing team's strategy. During such a period, it is likely that server usage spikes and touches the sky, exhausting your memory and either failing to process new requests or slowing down server processing, impacting response time. This deteriorates user patience and loses business. And what if the system has to be scaled down when there are fewer users? Adding or removing a server node from the cluster is an additional overhead for the operations team: procuring a server with the required prerequisites, OS licences, and set-up and configuration within the cluster. Hence, it has to be well planned, and an architect should consider all this overhead and do capacity planning well ahead of time. As an architect, you cannot predict it precisely; you may lay down capacity planning, but it still depends on the user numbers the business suggests. So, it is your responsibility to utilise cloud services, and they should be among your solution options.

The next question is which cloud features should be utilised, and for what purpose? Web services and/or web apps, if your game is made available on browser platforms, including apps on different devices.

Virtual Machines
VMs are plain-vanilla systems; they are like a blank slate. You can install any OS of your choice, or whichever OS the game-engine platform is intended to be built upon. A VM gives you complete control over the bare OS, along with management tools for its resources – network, bandwidth, how they should be used and operated – all through a console. Microsoft Azure provides one such console through its portal.

Cloud Services
Cloud services are built and hosted on top of VMs. A service can be a back-end process that needs to compute a complex algorithm, or something as small as a web application that simply uploads images to a storage server. These services are made available to users on demand. They are designed to provide easy, scalable access to applications, resources and services, and are fully managed by the cloud service provider, such as Microsoft Azure. A cloud service is similar to a daemon in Linux or a service in Windows, but with fault-tolerant and scalable qualities attached. They let you focus on application deployment rather than on all the supporting plumbing around it. A web app, such as an online game, can be built using ASP.NET, Python, PHP or Node.js, with back-end persistence in SQL Server or MySQL databases, or a NoSQL store such as MongoDB.

Cloud services also offer storage services, which come in different flavours and types: databases (both paid and free, both SQL and NoSQL), caching for temporary storage, file storage for physical files, and blob storage for binary files. Microsoft Azure offers everything in this category, from persistent to temporary storage and from SQL to NoSQL, with named offerings such as SQL Database, SQL Server on a VM, MySQL, MongoDB and DocumentDB. This allows you to store everything from gamer or player profiles to a player's game state, and from game data to game images and files on file storage. As an architect, it is necessary to define and evaluate which storage suits each data type you want to store.

You need to do capacity planning and determine auto-scaling options for your databases. It is also necessary to decide their location; this is crucial, since it will impact performance, an important quality attribute. Although Azure gives you plenty of options when deciding a server's geolocation, it is up to the architect to select the appropriate one based on the players' locations.

If it is an MMO, a common use case is that players should communicate with each other. To support the extremely large amount of data generated, you can go for DocumentDB, a NoSQL database, as a temporary storage service. It stores data in JSON format, which can be read and written using LINQ, eliminating joins and the question of how the relational form of the data would look. Azure supports DocumentDB, with a starting size of 10 GB.
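As a rough sketch of why a document store fits this case, a player's state can be persisted as a single JSON document and read back whole, without joins. The field names below are hypothetical, chosen only for illustration:

```python
import json

# Hypothetical player-state document for a store such as DocumentDB
state = {
    "id": "player-42",
    "level": 7,
    "inventory": ["sword", "potion"],
    "position": {"x": 10, "y": 4},
}

doc = json.dumps(state)       # the JSON string the store would persist
restored = json.loads(doc)    # read back as one unit, no joins required

print(restored["position"])   # {'x': 10, 'y': 4}
```

In a relational model, the inventory and position would likely be separate tables requiring a join on every read; here the whole state travels as one document.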

In an MMO, it is necessary to architect a tracking system that consumes information and sends it back down to other players or their opponents in real time. This could be temporary information kept in storage; what matters is the size and state of the data. Azure offers Redis caching, and this type of service is best suited to this use case. You can also support it using a pub/sub model.
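The pub/sub idea can be sketched in a few lines of Python. This in-memory class is only a stand-in for a real Redis channel; the class, channel and message names are made up for the example:

```python
from collections import defaultdict

class MiniPubSub:
    """In-memory stand-in for a Redis pub/sub broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel, handler):
        # Register a callback to receive every message on the channel
        self.subscribers[channel].append(handler)

    def publish(self, channel, message):
        # Fan the message out to every subscriber of the channel
        for handler in self.subscribers[channel]:
            handler(message)

bus = MiniPubSub()
seen = []
bus.subscribe("match-42:positions", seen.append)   # an opponent's client
bus.publish("match-42:positions", {"player": "p1", "x": 10, "y": 4})
print(seen)  # [{'player': 'p1', 'x': 10, 'y': 4}]
```

With Redis, the broker lives outside the game servers, so any number of servers can publish position updates and any number of clients can subscribe to them.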

File Storage and Media Services, from the Azure offerings, can be a handy option for storing small chunks of video files that can later be reviewed. A possible use case is recording a video of player behaviour and thought process in order to enrich the game's log, or to judge a player's behaviour during a particular level. This can then be shared with developers for debugging, or with analysts to improve the level's design and difficulty.

This is the information era. Business analysts may want to capture more information from the game app to get insights into how well it is performing. This can be done through a machine learning process that evaluates the massive amount of collected data to determine patterns and predict outcomes. Azure's machine learning offerings are extensive, giving the user more control to predict from historical data using statistical algorithms.

In a nutshell, Azure offers various options among its cloud services to help you, as an architect, choose how to support the game's implementation, and to help operations create resources on Azure, which is as simple as a few button clicks!

#Gamification | Choosing correct network protocol for communication #gamearchitecture

TCP keeps a connection to the server, keeps packets in order and acknowledges when you have sent it data. UDP can deliver out of order, has lower overhead and is considered connectionless: fire and forget. Their headers are slightly different. TCP keeps the acknowledgement number, sequence number, source port, destination port, header length, reserved bits, flags, window, checksum, urgent pointer and options; this is what lets it recover the data later. UDP's header consists only of source port, destination port, length and checksum, followed by the data. Hence, TCP is heftier than UDP in order to provide reliability, leaving less room for data in each packet, whereas UDP leaves more room for data because its header is smaller, and thus achieves faster throughput compared with TCP.
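The "fire and forget" nature of UDP is easy to see in code. The short Python sketch below sends a single datagram over the loopback interface; note that there is no connection handshake and no acknowledgement (the payload is an invented example of a player-position update):

```python
import socket

# Receiver: bind a UDP socket; port 0 lets the OS pick a free port
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(2.0)
port = recv.getsockname()[1]

# Sender: no connect(), no handshake; just fire the datagram and forget it
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"player-pos:10,4", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)
print(data)  # b'player-pos:10,4'

send.close()
recv.close()
```

A TCP version of the same exchange would need `listen()`, `accept()` and `connect()` calls before any byte of game data moved; that handshake and the per-segment acknowledgements are the reliability overhead described above.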

WebSockets is now a very popular communication protocol, as many players play games over the internet. Developed in February 2010 and initially supported by Chrome and Safari, it is now used by all major browsers. It overcomes the challenges of earlier protocols and makes it possible to maintain two-way communication. Since it is built on HTTP, it uses the same port 80, which is fairly advantageous as it is not blocked by firewalls. Like the earlier protocols it has a header, but a fairly minimal one, thereby allowing much more room for data. The data can be either text or binary; binary data might be a little faster, which is worth testing and evaluating. WebSockets also supports secure connections. There are different implementations of the WebSockets API, but most frameworks expose the same kind of operations: open, send, close and acknowledgement.

From a game architecture standpoint, it is important to choose the correct protocol, keeping in mind the multiple factors the game must deliver: reliability, latency, data encryption, performance and multi-player support.

Little known about IPX –
IPX is another protocol, though it is no longer in use. Developed by Novell's NetWare networking group, it was popular from the late 80s through the mid-90s. One popular game, StarCraft, used it but later patched over to TCP. It worked well for players on a home network, but it vanished due to scalability challenges.
