NDC Magazine: The Developer


  • 7/25/2019 NDC Magazine: The Developer

    1/68

THE DEVELOPER
MAGAZINE FOR SOFTWARE DEVELOPERS AND LEADERS
NO 3/2014 - NDC LONDON ISSUE
www.ndcmagazine.com

NDC London: Conference, 3.-5. Dec. Pre-workshops, 1.-2. Dec

On the cover:
- Creativity and Innovation: Critical Tools for Sustained Success - Denise Jacobs
- Don't Ignore Azure Storage Services - Mark Rendle
- Create Your Very Own IoT Devices - Suz Hinton
- Sensors, the Internet and Your Everyday Commute - Kristoffer Dyrkorn
- Status Fields on Entities: Harmful? - Udi Dahan


I think it's a fantastic time to be working in the software industry. We now truly have multiple platforms and delivery mechanisms: cross-platform desktop apps, highly functional web sites, mobile applications, wearables, and devices with no UI coming with the Internet of Things.

For developers we have functional languages, great client-side frameworks and cross-platform mobile tools, and the server-side ecosystem is being enriched with open source and Microsoft rebooting with ASP.NET vNext, supporting cross-platform execution for more effective cloud deployment.

Now that (some) developers have finally admitted their fallibility, we have placed testing into the development process as a first-class citizen. Now we need to convert that continuous integration into continuous delivery and deployment too.

In addition, it's time to see whether the original promise of Agile and XP is actually being fulfilled, and how we can improve the process of creating software. As our lives become more entwined with devices and software, we bear a great responsibility of getting things right and delivering software that is usable, flexible and reliable.

With this abundance of topics, creating a conference agenda that is fully inclusive is a formidable task, but I think we've managed to make NDC London provide a fantastic snapshot of where software development is heading in 2015.

Liam Westley, Application Architect at Huddle

THE DEVELOPER 3-2014
NDC Magazine for software developers and leaders

Advertise in the NDC Magazine - for software developers and leaders - and meet your target group!

For more information about advertisement, please contact Charlotte Lyng at +47 93 41 03 57 or [email protected]

[Cover image of NDC Magazine NO2/2014, Special NDC Issue: Creating Your Own Skunk Works (Niall Merrigan); Fluent APIs (Kevin Dockx); Accelerating Windows Store App Development (David Britch); Add new dimensions to your algorithms using F# type inference (Vagif Abilov); 5 audio tricks you didn't know browsers could do (Jory Prum). Norwegian Developers Conference, Oslo Spektrum, 4-6 June, pre-workshops 2-3 June.]

    From Crisis to Success

Publisher: Norwegian Developers Conference AS. By Programutvikling AS. Organisation no.: 996 162 060

Address: Martin Linges vei 17-25, 1367 Snarøya, Norway. Phone: +47 67 10 65 65. E-mail: [email protected]


    Member of Den Norske Fagpresses Forening

Design: Ole H. Størksen

Uncredited images are from Shutterstock, except portraits.

    Editor:

    Kjersti Sandberg

    Marketing Manager:

    Charlotte Lyng

    Contents

ARTICLES

Sensors, the Internet and your everyday

    commute ................................................................................................................................ p. 4

Using the Arduino platform to create your very own IoT devices ..................................... p. 8

Status fields on entities: HARMFUL? ........................................ p. 12

Don't Ignore Azure Storage Services ........................................... p. 20

    Unlock the value of your data with ElasticSearch....... p. 24

    Architecture for CD ............................................................................................. p. 30

    Creativity and Innovation: Critical tools for

    sustained success ................................................................................................ p. 34

    Thinking like an Erlanger Part 1 ..................................................... p. 38

    Create a custom agile process for your

    organization.................................................................................................................... p. 44

    Exceptional naming ............................................................................................. p. 48

    Course descriptions .......................................................................................... p. 50

    NDC LONDON 2014

    Tickets.................................................................................................................................p. 55

    The NDC Agenda Committee ..................................................................p. 56

    Workshops ........................................................................................................................p. 58

London ..................................................................................................................................p. 60

Entertainment and food ................................................................................p. 62

    Program Wednesday - Friday...................................................................p. 64


Sensors, the Internet and your everyday commute


I sometimes hear that Norwegians tend to do the same things at the same time. We are a little bit like herd animals, since we go to work at around the same time, deliver kids to school or kindergarten at the same time, and drive to our cottages on Friday afternoons and back on Sunday evenings. Since the roads are not built for our habitual behaviour, we get stuck in traffic - especially in our more densely populated areas.

It must also be said that road construction and maintenance is a challenging task in Norway. Due to the topography and the harsh climate, the costs associated with building and keeping roads at high standards are substantial. In addition, planned or unplanned maintenance will often lead to congestion - and setting up detours is sometimes impossible since our road network is sparse. Still, proper maintenance is vital for cost-effective operation.

The new traffic measurement system will be beneficial for both us and the road administrators. It will produce more detailed data about traffic patterns - data that will be publicly available for us to download and build applications upon. The same type of information will be used for creating detailed maintenance plans or improving road safety. Traffic volumes and vehicle weights are two of the main factors that decide wear and tear on road surfaces. The climate, and especially seasonal phenomena like frost heaving, also contribute.

The Norwegian Public Roads Administration is building a new infrastructure for road traffic measurements. Besides classical project goals like improved data quality and lower maintenance costs, you and I will also benefit: the system will make it possible to pinpoint where and when congestion normally occurs in the rush hours and on special holidays. So in the future, you might look at both weather and traffic forecasts when planning your Easter holidays in the mountains!

hobbit/Shutterstock

    By Kristoffer Dyrkorn


The system consists of roadside sensors, embedded devices for vehicle registration and classification, a data gathering network, and a server-side application for analysis, storage and reporting.

The information that is gathered should be harmless from a privacy point of view. The sensors utilise induction and pressure sensitivity. They are built into the road pavement and do not register any identifying information about the passing vehicle, owner or driver. Only the fact that a vehicle has passed, and its speed and weight, is logged. The model, colour or registration number cannot be detected by the sensors.

A registration and classification device converts the magnetic signature of a vehicle into a record containing a time stamp and the measured vehicle length, weight and speed. The record is then transmitted over a standard Internet connection using OPC-UA, a protocol stemming from the automation and process control industry. The data is encoded in a compressed binary format, while the protocol itself guarantees reliability, security and integrity - at the same time being platform independent and resource efficient.

The record is received by a Java application that does simple near-time analysis. As an example, it is important to immediately detect and flag any vehicle that is driving in the wrong direction on a motorway. Such a situation is extremely dangerous, and nearby drivers should be informed through any channels available. The detection of these so-called "ghost drivers" is based on the registration of negative speeds. On two-lane roads, however, a negative speed might come from a car passing another car - which is a normal event. The application thus contains logic to separate these two situations from each other.
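The separation logic itself isn't shown in the article; here is a minimal sketch of the heuristic just described, written in JavaScript rather than the system's Java, where the record fields and the oneWay road flag are illustrative assumptions:

```javascript
// Sketch of the ghost-driver check described above. The record shape and
// the road.oneWay flag are assumptions for illustration, not the real
// system's (Java) data model. A negative speed means the vehicle moved
// against the sensor's registration direction.
function classifyNegativeSpeed(record, road) {
  if (record.speedKmh >= 0) {
    return 'normal';
  }
  // On a one-way carriageway (e.g. a motorway), a negative speed can only
  // mean a vehicle driving against traffic: flag it immediately.
  if (road.oneWay) {
    return 'ghost-driver';
  }
  // On a two-lane road, a negative speed is usually just a car crossing
  // into the opposite lane to overtake: a normal event, not an alarm.
  return 'possible-overtake';
}

var record = { timestamp: Date.now(), lengthM: 4.2, weightKg: 1800, speedKmh: -87 };
console.log(classifyNegativeSpeed(record, { oneWay: true })); // prints "ghost-driver"
```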

The record is then stored in a database. Here NoSQL technology provides the needed support for high-speed writes, large data volumes, robustness, and quick and efficient reporting. The system stores all records explicitly to secure full flexibility in the report creation, both for current and historical data.

Currently, the system is in its start-up phase. The software has all the needed basic functionality for data gathering and reporting. A large batch of roadside sensors is now being acquired and will be installed along the main roads in Norway in the next couple of months. After that, the system will be enhanced with new functionality and adaptations for larger traffic volumes as more and more sensors get installed.

One of the main goals of the system is to improve road traffic statistics for better maintenance planning and investment control. However, it is also a clear goal that the system should provide open and near real-time traffic data to the public and to public and private institutions. There are several use cases that illustrate the value of such data: police or ambulance drivers in an emergency can get driving recommendations that consider the current traffic situation. Ordinary drivers can be advised about congestion ahead and be offered alternative routes. Also, parcel service companies can provide more precise estimates on expected arrival times to their customers.

The potential value of this system thus spans efficient public spending, nice-to-have information for technology and/or driving nerds, and time savings of potentially life-saving character. We will need to wait some time to see how the system will be used and whether its full potential will be reached. One thing is clear: all of this is made possible just by using software, sensors, the Internet and hardware that is connected!

Kristoffer is a Scientist at BEKK. He has been a developer, team lead or solution architect for the last 15 years and has broad experience from a variety of fields such as search engines, performance tuning, NoSQL for analytics and 3D graphics. At the moment he is mostly working on systems architectures for distributed stream processing.

    hobbit/Shutterstock


This is just a quick project showing how simple it is to get up and running with building and programming your own Internet of Things devices.

When all assembled and placed in a mailbox, this contraption will connect via WiFi to both email and SMS you when you get mail!

In this example, I'm using an Arduino microprocessor teamed with a light sensor and a WiFi module. A battery powers the Arduino, which constantly keeps track of how much light is in the mailbox. When the postal delivery opens the little back door to put your mail and parcels in, light streams into the mailbox. This trips the device, which will then connect to WiFi and send the email and SMS to you.

If you do not have one of those door-opening mailboxes, you can program it the opposite way - put it in your mailbox so that the sensor gathers light from the mail slot, and when envelopes are pushed through it blocks the light momentarily. Effectively that will be the trigger.

As a software developer, there's something magic about being able to say, "I made this myself!". Building hardware to integrate with software is no different. Sure, you can buy off-the-shelf IoT devices for your home and lifestyle, but there's much more fun in building your own. You'll learn a lot about how they work, and will take away a sense of pride and achievement.

    By Suz Hinton

    Vudhikrai/Shutterstock

Using the Arduino platform to create your very own IoT devices


This only works in daylight, but it's pretty cool anyway. Your mileage may vary with WiFi strength if your mailbox is made of metal.

The full code is hosted on GitHub: https://github.com/noopkat/tinyduino-wifi-helloworld

**Warning** - postal companies might not be too pleased about spotting a mysterious device in a mailbox. They won't always assume it's a perfectly harmless Internet of Things device. I am not condoning breaking the law with this project. Make sure you're familiar with mail/mailbox tampering laws in your country. Long story short - don't scare your friendly neighbourhood post deliverer. Nobody wants the police outside their house!

SHOPPING LIST
- TinyCircuits Processor board
- TinyCircuits WiFi module
- TinyCircuits Proto module (any will do)
- TinyCircuits USB programmer module (not pictured)
- Micro USB cable
- 10k Ohm resistor
- Photocell resistor
- 3.7v LiPo battery, at least 500mA

Tools I assume you have already:
- LiPo charger
- Soldering iron
- Solder

ASSEMBLY
My favourite feature of the TinyCircuits system (other than the tiny size) is the way they easily click together to supply the technology you need. First, we're going to click everything together as we need it.

Take the processor board, and click the USB module to the top of it. Then, click the WiFi module to the top of the USB one. Lastly, click together the proto board to the very top of the stack. This will be the order of the stack when we develop it. It allows us to power the board without a battery via the computer, and we can also program it this way.

Next, remove the proto module from the top of the stack. Solder the resistor and photocell to it. The resistor goes from A0 to GND, and the photocell shares the A0, and the other end goes to VBATT. VBATT is a special pin that makes both the USB and battery power sources available to use.

See the photo on the next page for how it will look when complete. Note I have left the component arms rather long so that it's easier to see what's going on.

UPLOAD ARDUINO CODE
Click the proto module back onto your TinyCircuit stack. Plug a micro USB cable into the programming board, then connect to your computer. In the Arduino software, set the board to 'Arduino Pro Mini 3.3v w/ ATMega 328'. Open the sketch from the repo and click 'upload'. Select the correct USB device if prompted.

    Parts needed


    Final assembly

The main loop of the sketch is pretty simple; see the truncated code below:

// Note: sensorPin, sensorValue and bright are declared earlier in the
// full sketch, and cc3000 is the WiFi driver object set up there too.
uint32_t ip = cc3000.IP2U32(10,0,1,4);
int port = 8081;
String path = "/mailbox/token/";
String token = "558822";
String request = "GET " + path + token + " HTTP/1.0";

void loop(void)
{
  sensorValue = analogRead(sensorPin);

  // print to serial monitor for debugging purposes
  Serial.println(sensorValue);

  // if light source appears, set bright to true and send request
  // you may have to fiddle with this value to suit your light source
  if (sensorValue < 450 && bright == false) {
    // this will ensure request is sent only once
    bright = true;

    // send request to mailbox server
    send_request(request);
  }

  // light source gone, go back to dark mode again
  if (sensorValue > 450 && bright == true) {
    bright = false;
  }

  delay(300);
}

You'll definitely need to play with the default light reading value set in the code above for best results with the physical place you're setting up your device in.


SET UP YOUR SERVER
I created a really simple NodeJS hapi server instance for dealing with the email and SMS. It is running on the local network the device is connected to via WiFi. The server simply waits for a GET request to the specified route, then verifies the request with a static token the Arduino will send as part of the request. This token should help stop your friends pranking you!

When the request is successful, the route handler will call some third-party APIs to send the notifications. I'm using Twilio for SMS, and Mailgun for the email.

    A truncated sample of the code:

var Hapi = require('hapi');

var server = new Hapi.Server('10.0.1.4', 8081);
var passToken = '558822';

server.route({
  method: 'GET',
  path: '/mailbox/token/{token?}',
  handler: function (request, reply) {
    if (request.params.token && request.params.token === passToken) {
      // Twilio
      sendSMS();

      // MailGun
      sendEmail();

      // simple reply for testing manually
      reply('you\'ve got mail!');
    } else {
      // someone's being a prankster if missing/wrong token
      reply('why you gotta troll me, friend :(');
    }
  }
});

// Start the server
server.start();

    Start the server in bash:

    node index.js

    TEST YOUR DEVICE

While still plugged in to your computer, reset your device, and make sure your NodeJS server is running. To debug, watch both the Arduino app serial monitor's output and your bash window. Play with the photo sensor, covering it up and then exposing it to light. You should start receiving emails and SMSes.

Once verified, you can take the USB programming module out of your TinyCircuit stack completely, as we no longer need it. You'll see 2 terminals on the processor board. Solder a LiPo battery JST connector to these terminals in order to have this device run truly wire-free and discreetly. Test again with the LiPo battery connected, then you're ready to use your finished device!

OPTIONAL STEP
3D print a case for your new contraption, to make it look geek professional.

Suz Hinton is a software engineer by day, and maker by night. Having developed several delightful devices over the years with the help of microcontrollers, Suz is a big supporter of the recent IoT movement. Her interests within the IoT sphere are on a data ownership and educational level. She believes that empowering people to utilize the latest technology and manufacturing techniques will see a high level of innovation and an increase in access to the field of wearables, home automation, and medical assistive devices.


It all started so innocently: just a little status field on an entity.

Now, there's no way to know whether it was truly the root cause of all the pain that came afterwards, but there does seem to be some suspicious correlation with what came next.

    By Udi Dahan

Status fields on entities: HARMFUL?


    Kirill_M/Shutterstock

Today, our batch jobs periodically poll the database looking for entities with status fields of various values and, when they do, the performance of the front-end takes a hit (though the caching did help with the queries). Once upon a time it used to be manageable, but with over 50 batch jobs now the system just can't get its head back above water.

Even in the cases where we used regular asynchronous invocations, the solution wasn't particularly solid: it was enough for a server to restart and any work not completed by those tasks would be rolled back, and any memory of the fact that we really need that task to be done gone along with it.

And don't get me started on the maintainability or, more accurately, the lack of it. Every time someone on the team made changes to some front-end code, they invariably forgot at least one batch job that should have been changed as well. And since some of these jobs can run hours and days later, we didn't really know that the system worked properly when we


deployed it. And don't bring up that automated testing thing again - we've got tests, but if the developer was going to forget to change the code of the batch job, don't you think they'd forget to change its tests too?

If only I could say never again, but this is the third system rewrite that I've seen go bad in my career. And the alternative of continuing to battle an aging code base that gets ever more monolithic isn't any more appealing.

It's like we're doomed.

DID ANY OF THAT SOUND FAMILIAR?
If so, you might be able to take some comfort in the fact that you're not alone. Misery does love company, after all.

In any case, let's rewind the story back a bit and look at some of the inflection points.

As time goes by, the logic of many systems gets more and more complex, starting slowly with some status fields which influence when logic should be triggered. Together with that complexity, the execution time of the logic grows, often making it difficult for the system to keep up with the incoming load. It's at that point in time that developers turn to asynchronous technologies to offload some of that processing.

THE PROBLEM WITH ASYNC/AWAIT
When you have some work that can take a long time, it is often appropriate to invoke it asynchronously so that the calling thread isn't blocked. This is most significant in web front-end scenarios, where the threads need to be available to service incoming HTTP requests.

Of course, as is often the case with web requests, we do want to give the user some feedback when the processing completes. This can be done by marshalling the response from the background thread to the original web thread - something that has been greatly simplified with the async/await keywords in .NET version 4.5.

The issue is, as mentioned above, that there is no built-in reliability around these in-memory threaded constructs. If an appdomain is recycled, a web server crashes, or any number of other glitches occur, not only is the work done on the async thread lost (if it didn't complete), but the managing thread that knew what needed to happen next is also gone, in essence leaving our process in a kind of limbo.

Interestingly enough, sometimes batch jobs are created as a kind of clean-up mechanism to fix these kinds of problems. Also, since batch jobs frequently operate with database rows as their input as well as their output, it is believed that many of the reliability concerns of in-memory threading are resolved.

But we'll get to that in a bit.

First, let's talk about how the introduction of these batch jobs influences our core business logic:

THE IMPACT OF BATCH JOBS ON BUSINESS LOGIC
While developers and business stakeholders do understand that introducing these batch jobs into the solution will increase the overall time that it takes for a business process to complete, it is considered a necessary evil that must be endured so that the front-end can scale.

Unfortunately, what is often overlooked is the fact that the business logic that used to be highly cohesive has now been fragmented, as shown in Figure 1.

Now, if the logic was merely divided into two parts things might have remained manageable, but often the logic in the batch job is rewritten as stored procedures [1] in the database or using other Extract-Transform-Load [2] (ETL) tools like SQL Server Integration Services [3] in an attempt to improve its performance. This is sometimes exacerbated by the fact that a different team, one focused on those other technologies, ends up maintaining that batch job.

And while a single batch job is likely not considered to be all that evil, it does seem that when a second batch job comes along they start to reproduce. And their devilish spawn is what ultimately brings our system to its knees, with logic scattered all over the place.

And, although each decision taken seemed to make sense at the time, it seems that this road to hell was also paved with plenty of good intentions.

THE NOT-SO-NIGHTLY BATCH
It was not so long ago that business was conducted from 9 to 5 in a specific time zone.

You could assume that your users wouldn't need to access various systems during off hours.

But it's a very different world these days - a more connected and more global world. Systems that used to be accessed only by employees of the company have been opened up for end-user access, and these end users want to be able to do anything

Figure 1. Introducing a batch job into a system results in the fragmentation of business logic


and everything on their own schedule, 24x7. In an attempt to keep pace with end-user demand, employees have similarly transitioned to an always-on mode of work, on top of increasing travel demands.

In short, that once luxurious night in which we could run our batch jobs uninterrupted has shrunk so much over the past 20 years that it's practically nonexistent anymore.

THE PERFORMANCE IMPACT
The whole idea of moving logic into a nightly batch was so that it wouldn't impact the performance of the system while the users were connected, but it seems that this has boomeranged on us. Anybody who tries to use the system at a time when a batch is running gets significantly worse performance than if we had kept the original logic running in real time, as at that time the batch is processing all records rather than just the ones that the user cares about.

On top of that, the fact that a batch job runs only periodically increases the end-to-end time of the business process by that period. Meaning that if you have a nightly batch, it could be that a given business process that started right as the nightly batch completed would have to wait almost 24 hours to complete.

If there are additional batch jobs that pick up where other jobs left off as part of an even larger enterprise process, that sequence of batch jobs can cause these processes to drag on over a period of days or weeks.

And if we start looking at how failures are dealt with, the picture begins to look even more bleak.

DEALING WITH FAILURE
Any job actually failing can add even further delays. This failure could be caused by some other transaction processing the same record at the same time - a concurrency conflict. To be absolutely clear, what this means is that the records which are the most active are the most likely to fail.

Unfortunately, there is no built-in way to have the processing retry automatically, so developers usually don't remember (or put in the effort) to create one. "It'll just be picked up in the next cycle," they tell themselves. However, it may be just as likely that a conflict will happen on the next cycle as it did in the last one. Every time something fails, that's another delay in the business process.
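Such a retry mechanism need not be large; here is a minimal sketch of one, assuming the unit of work is a synchronous function and that a concurrency conflict surfaces as a thrown error (names like withRetry are illustrative, and a production version would add backoff and a dead-letter step):

```javascript
// Minimal retry wrapper for a unit of work that may hit concurrency
// conflicts. The names and error handling here are illustrative; real
// systems would add backoff, jitter, and a dead-letter step on failure.
function withRetry(work, maxAttempts) {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return work();
    } catch (err) {
      // On the last attempt, give up and surface the failure.
      if (attempt === maxAttempts) throw err;
      // Otherwise fall through and try the work again.
    }
  }
}

// Example: a unit of work that conflicts twice, then succeeds.
var attempts = 0;
var outcome = withRetry(function () {
  attempts++;
  if (attempts < 3) throw new Error('concurrency conflict');
  return 'processed';
}, 5);
console.log(outcome); // prints "processed"
```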

RISKS AROUND PERFORMANCE OPTIMIZATIONS
Sometimes developers attempt to optimize the performance of the batch jobs in an attempt to get them to keep pace with the ever-increasing number of records in the system. One of the techniques that's used is to operate on multiple records within the same transaction rather than doing one transaction per record.


While this does tend to improve performance, it has the unfortunate side effect of increasing the impact of transaction rollbacks when failures occur. Instead of just one record reverting to its previous state, all the records in that transaction get reverted.

This increases the business impact as more instances of the business process get delayed.

Also, as mentioned above, since the most active records are the ones most likely to fail, and the activity levels of various records fluctuate over time, it is quite possible that a given set of records will end up failing repeatedly as conflicts occur for different records within the same set.

In short, while more records can theoretically be processed per unit time when performing multi-record transactions, it may very well be the case that the rate of successful record processing actually decreases.

So, how do we resolve all of these issues?

USE A QUEUE AND MESSAGING (PART 1)
When developers hear the term "queue" they usually think of technologies like MSMQ, RabbitMQ, or something else that ends in the letters MQ (meaning message queue). While those are viable approaches, it is important to understand the architectural implications of a queue first.

A queue is a first-in, first-out (FIFO) data structure that enables different actors to interact in a decoupled manner.

The important thing about the messages that are pushed into and popped out of the queue is that they are immutable - meaning their values don't change.

HOW QUEUES & MESSAGING ARE DIFFERENT FROM DATABASES
While it is common to have different actors reading and writing from tables in a database, the difference is that the entities in those tables are modified by those actors - meaning that they are not immutable. In this sense, traditional batch operations aren't really using queuing or messaging patterns for their asynchronous communication.

While it is perfectly feasible to implement a queue on top of a regular table in a database, it is important that the code that reads and writes from that table treats its contents as immutable messages - not as a master data entity.

For this reason, it is usually desirable to abstract the underlying technological implementation of the queue from the application-level code - something like an IQueue interface with Push and Pop methods that have copy semantics on the message objects flowing through the queue.
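The article names the shape of that abstraction but shows no code; here is a sketch of one possible in-memory implementation (the class name and the JSON-based copy are assumptions), where copying on both push and pop keeps any caller from mutating a shared message:

```javascript
// In-memory sketch of the IQueue idea described above: FIFO order, with
// copy semantics so messages stay effectively immutable to callers.
// The copy is a JSON round-trip, which assumes plain-data messages.
function InMemoryQueue() {
  this.items = [];
}

InMemoryQueue.prototype.push = function (message) {
  // Store a copy, so later mutations by the producer don't leak in.
  this.items.push(JSON.parse(JSON.stringify(message)));
};

InMemoryQueue.prototype.pop = function () {
  var message = this.items.shift(); // first in, first out
  // Hand back a copy, so consumers can't mutate what others might see.
  return message === undefined ? null : JSON.parse(JSON.stringify(message));
};

var q = new InMemoryQueue();
var msg = { orderId: 42, status: 'Billed' };
q.push(msg);
msg.status = 'Shipped';      // the producer mutates its own object...
console.log(q.pop().status); // ...but the queued copy still prints "Billed"
```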

QUEUE VS. DATABASE CONSIDERATIONS
The advantage of using a database-backed implementation of an abstract queue is that all data continues to flow through the exact same persistence mechanism, resulting in simpler deployment, high availability, backup, and restore.

The disadvantage of using the database is that, depending on your database technology, it may be more expensive to scale the database to meet increasing performance needs than a message queue. That being said, just like there are numerous free and open-source message queues, there are also numerous free and open-source databases.

The main advantage of using message queuing technology is that it was designed specifically to address these kinds of problems. You'll usually find that queues are able to achieve higher throughput as well as give you better control around how messages should be delivered and processed.

For example, you may have certain kinds of messages which represent data arriving from sensors at a high rate that don't actually have to be persistent, as you don't care if they get dropped in case of a server crash. A message queue enables you to define these messages as non-durable and thus achieve much better performance.

The main disadvantage to introducing message queuing technology is that it is another moving part in your system: something that administrators will need to learn how to configure, deploy, etc. That has its own cost (if the administrators aren't already familiar with it).

LEAKY ABSTRACTIONS
While you might think that having this abstraction will practically insulate your system from the underlying technological implementation, it is important to understand that once the system is live, there will be many messages flowing through it on an ongoing basis.

In order to switch from one implementation of a queue to another (to or from a queue, to or from a database), it isn't just a simple matter of changing the class that implements the interface. You may need to drain the system, meaning having it refuse any new requests until it finishes processing all the existing messages. This can mean significant downtime for a system.

Alternatively, you could write scripts which move all of the messages currently in flight from one persistent store to the other. Like all data migrations, this can be tricky and should be tried and tested sufficiently well in pre-production environments before attempting it in production.

USE A QUEUE AND MESSAGING (PART 2)
If you start writing your application code using this kind of IQueue interface, adopting a more explicit message-passing communication pattern in your system, many of the problems mentioned above will be much easier to solve. Let's see why.

Once you use an explicit message object to pass data between actors (rather than having them polling entities of various statuses and updating those exact same entities), you reduce the contention on your master data entities. The message object communicates the important status changes and can serve as a more formal contract between those actors.

Just like any other interface in your system, a message contract should be versioned carefully, taking into account which consumers could be affected.

PROCESS STATE VS. MASTER DATA
Entities with status fields often end up doing double duty, as both master data and as a holder of the state of some longer-running business process. This probably isn't the best idea, as stated by the tried-and-true Single Responsibility Principle [4].

If you have an entity with a status field where that status changes values over time, that is usually an indication that you should create a separate process-centric entity that holds data related to the process that isn't necessarily master data. While it can sometimes be tricky to draw the line between the two, it is a worthwhile exercise.
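To make the distinction concrete, here is an illustrative sketch (the names Order and OrderFulfillment are invented for this example, not taken from the article): the master data entity keeps only stable business facts, while a separate process-centric entity tracks the state of the long-running process.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Order:
    """Master data: stable business facts, no status field."""
    order_id: int
    customer_id: int
    total: float

@dataclass
class OrderFulfillment:
    """Process-centric entity: holds process state and history only."""
    order_id: int
    state: str = "accepted"        # e.g. accepted -> picked -> shipped
    history: list = field(default_factory=list)

    def advance(self, new_state):
        # Record each transition so progress stays visible.
        self.history.append((self.state, datetime.utcnow()))
        self.state = new_state

order = Order(order_id=42, customer_id=7, total=99.5)
fulfillment = OrderFulfillment(order_id=order.order_id)
fulfillment.advance("picked")
fulfillment.advance("shipped")
print(order)              # the master data never changed
print(fulfillment.state)  # shipped
```

A UI that needs to show progress reads the process entity; the master data entity stays free of process churn.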

Don't be influenced by the need to show the state of the process to the user: you can just as easily create a UI on top of a persistent process entity as you could on top of a regular business entity.

That being said, sometimes you can model these processes as a kind of event cascade, as shown in Figure 2 and Figure 3.

Often in a system, a combination of approaches is used, with some event-driven publish/subscribe interaction and some process objects where we require more control and visibility of progress.

THE PERFORMANCE IMPACT
Regardless of whether you use an actual message queuing technology or a database, once you make your messages immutable, you will have removed some of the contention on the entities (as they won't be serving double duty as both a message and an entity anymore).

When taken together with building blocks that have their own database schema, the rest of the contention is removed, enabling us to perform the processing in real time rather than as a batch.

This combination can reduce business process times from days and weeks, when performed as a series of batch jobs, to minutes and seconds.

DEALING WITH FAILURE
The second significant benefit of message-driven solutions is that many queues already have retry semantics built in, so that even in the case of message processing failure, not only do things roll back, but they get processed again automatically (and in the same real time as before).

Queuing technology usually has additional capabilities in this area, including the ability to move problematic messages off to the side (into a "poison letter queue") so as to free up the processing of other messages. You can usually configure the policy around how many times a message needs to fail before it is flagged as a poison message.
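The retry-and-poison-message policy described above can be sketched roughly like this (a toy in-memory version; max_retries and the queue shapes are illustrative, not any particular product's API):

```python
from collections import deque

def process_with_retries(queue, handler, max_retries=3):
    """Pop messages and retry failures; after max_retries failed
    attempts a message is moved to the poison queue instead of
    blocking the processing of other messages."""
    poison = deque()
    failures = {}
    while queue:
        msg = queue.popleft()
        try:
            handler(msg)
        except Exception:
            key = msg["id"]
            failures[key] = failures.get(key, 0) + 1
            if failures[key] >= max_retries:
                poison.append(msg)   # flagged as a poison message
            else:
                queue.append(msg)    # rolled back: retried later
    return poison

def handler(msg):
    # Stand-in for real message processing; fails on "bad" messages.
    if msg.get("bad"):
        raise ValueError("cannot process")

queue = deque([{"id": 1}, {"id": 2, "bad": True}, {"id": 3}])
poison = process_with_retries(queue, handler)
print([m["id"] for m in poison])   # [2]
```

Note how the healthy messages (1 and 3) are processed normally even while message 2 keeps failing; only after its retry budget is spent does it land in the poison queue.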

TRANSITIONING FROM BATCH TO QUEUES
The good news is that if you're already making use of batch jobs, a lot of your code is already running asynchronously from previous steps. This makes introducing a queue much easier than if you had everything running in one big, long process in your front end.

Figure 2. Batch processing moving a business process forwards

    Figure 3. Cascading events using a queue as a business process


One challenge you may have, if a lot of the batch job logic was (re)written as stored procedures in the database, will be rewriting that logic back in your original programming language. You'll usually get better testability along the way, but this can take some time.

It is best to start your transition from the last batch job in the sequence and slowly work your way forwards. That way, you're not destabilizing critical parts in the business process where it isn't clear what other jobs are depending on them.

When opportunities present themselves for building new functionality, or extending an existing business process, look to create new events that you can publish, and have a new subscriber run the logic for those events using these new patterns. Hopefully, this will allow you to demonstrate the shorter time to market that this approach enables.

IN SUMMARY
Every time you see yourself creating a status field on a given entity, keep your eyes peeled for some batch job that will be created to poll based on that status. It would be better to create an explicit event that models what happened to the entity and have a subscriber listening for that specific case.
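As a tiny illustration of that suggestion (invented names, not the article's code): publishing an explicit event and letting a subscriber react removes the need for a polling batch job.

```python
class EventBus:
    """Minimal publish/subscribe dispatcher."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers.get(event_type, []):
            handler(payload)

bus = EventBus()
shipped = []

# Instead of a batch job polling for status == 'accepted',
# a subscriber reacts to the explicit OrderAccepted event.
bus.subscribe("OrderAccepted", lambda e: shipped.append(e["order_id"]))

bus.publish("OrderAccepted", {"order_id": 42})
print(shipped)   # [42]
```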

If your system is built using .NET and you'd like to use a framework that abstracts away the queuing system, as well as enabling you to run on top of regular database tables for simpler deployment, take a look at NServiceBus. All of that functionality is available out of the box, along with the ability to extend it for your own needs. Production monitoring and debugging tools are also available for NServiceBus as part of the Particular Service Platform.

For more information, go to http://www.particular.net

REFERENCES
1) http://en.wikipedia.org/wiki/Stored_procedure
2) http://en.wikipedia.org/wiki/Extract,_transform,_load
3) http://en.wikipedia.org/wiki/SQL_Server_Integration_Services
4) http://en.wikipedia.org/wiki/Single_responsibility_principle

Udi Dahan is one of the world's foremost experts on Service-Oriented Architecture and Domain-Driven Design, and also the creator of NServiceBus, the most popular service bus for .NET.


    Database Lifecycle Management

Your SQL Servers get source control, continuous integration, automated deployment, and real-time monitoring. You get fast feedback, safe releases, rapid development, and peace of mind.

Find out how at www.red-gate.com

    @redgate /RedGateSoftwareTools /RedGateVideos


    By Mark Rendle

    Everything_possible/Shutterstock

Don't Ignore Azure Storage Services


Microsoft's Azure cloud platform has come a very long way since the first public preview just five years ago. Back then, the only option for running applications was Cloud Services, which were hard to work with and could take half an hour to start. And the only option for data storage was Azure Storage Services, offering Blobs for unstructured data, Tables for structured data, and Queues for durable messaging.

These days things are very different. There are multiple ways to run your code (or other people's), from Infrastructure-as-a-Service (IaaS) VMs running Windows or Linux, to high-density Platform-as-a-Service (PaaS) Azure Web Sites. And for data storage, you're spoiled for choice. For relational databases, Azure SQL Database is a good, solid service; ClearDB provides managed MySQL; and you can run full SQL Server, or Oracle, or whatever database you like (hint: PostgreSQL) in an IaaS VM. You can get managed NoSQL solutions like RavenDB, MongoDB or Microsoft's just-launched DocumentDB. Fast, non-persistent caching is now available with the Redis Cache Service. Messaging can be handled using Service Bus or Redis.

With this smörgåsbord of services available, then, it's easy to forget about the original Storage Services, but that would be a mistake. They were designed to provide massively scalable and cost-effective solutions, and they still do that today. In fact, they're more cost-effective than ever, with per-gigabyte storage and bandwidth and per-transaction prices regularly being cut.

So let's take a fresh look at these services, and talk about what you should use them for to increase performance and save money.

BLOB STORAGE
This is the old faithful, the service that everybody uses, even if they don't know it. You can put any file in Blob Storage and it's stored in at least three places; more if you have Zone- or Geo-Redundancy switched on. If you're running any kind of VM, the underlying virtual disk (VHD) file is a page blob. But a lot of people, particularly those who have migrated an existing system to Azure, still keep massive lumps of binary data in their SQL database. Don't do this. For one thing, it's really expensive; for another, if you do it with Azure SQL Database, you're really going to hurt performance.

Take the time to rewrite your file storage code to use Blob Storage, and just store the URI in your database (DocumentDB's attachments feature actually does this for you, automatically). If you're using an off-the-shelf CMS or blog engine, most of them have plug-ins available to store uploaded files in Azure Blobs. You'll save money and your site or application will run faster.
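The pattern is simple enough to sketch generically (this is an illustration of the idea, not the Azure SDK; the in-memory dict stands in for a Blob container, and the account name in the URI is invented):

```python
import hashlib
import sqlite3

blob_store = {}   # stands in for a Blob Storage container

def upload_blob(data: bytes) -> str:
    """Store the bytes in the 'blob store' and return a URI for them."""
    name = hashlib.sha1(data).hexdigest()
    blob_store[name] = data
    return f"https://example.blob.core.windows.net/uploads/{name}"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, blob_uri TEXT)")

# The binary payload goes to blob storage; only the URI goes in SQL.
uri = upload_blob(b"...big binary payload...")
db.execute("INSERT INTO documents (blob_uri) VALUES (?)", (uri,))
db.commit()

print(db.execute("SELECT blob_uri FROM documents").fetchone()[0])
```

The database rows stay small and cheap, and the heavy bytes live where bulk storage is priced for them.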

You can also use Blob Storage accounts as end-points for the Azure Content Delivery Network (CDN), allowing you to upload your data to a single point and have it served through nearly thirty data-centres around the world.

TABLE STORAGE
Azure Table Storage is what's technically known as a Key-Value store, like Cassandra or DynamoDB. That means it scores highly for performance, scalability and flexibility, but not so great for complexity or functionality. Azure Tables have no schema: an Azure Table entity is a collection of key-value pairs, and entities within a table can have different keys. There are no indexes, no cross-table joins, and no real transactional capabilities.

So what can you use Table Storage for? All sorts of things! Anything where you just need really quick, scalable storage and retrieval of individual records. In my Azure applications, I use Table Storage for system concerns like User tables, configuration, and Session data. It's a great repository for logging or audit data, and libraries like log4net, NLog and Serilog all support Table Storage as a target.

My rule of thumb here is: start with the simplest thing; if it works, use it; if you need more features, look at the next simplest thing.

QUEUE STORAGE
As mentioned earlier, there are now a variety of messaging solutions available in Azure, and Service Bus in particular is far more powerful than Azure Queue Storage. It supports topics and pub/sub models, as well as providing similar basic queue functionality. If you need very low latency messaging, to send messages to multiple receivers, to store messages for more than a week, or to send messages larger than 64KB, then use Service Bus.

If, however, your requirements are simpler, Queue Storage is still a good option. The most common use case is for scheduling background operations to run in Worker Roles or Azure WebJobs, but I've also seen Queues used as a data synchronization mechanism to push arbitrary data to on-premises systems. As with Tables, if Queue Storage meets your requirements then use it. If it doesn't, use Service Bus.


FILE SERVICE
This is a recent addition to Azure Storage Services, and a very welcome one. Until the File Service launched, if you wanted a persistent file system in a Cloud Service or Virtual Machine, you had to create a Cloud Drive: a VHD held in Blob Storage as a Page Blob. These drives could be mounted to multiple server instances at a time, but only one would have write access. And if you wanted to get the files off the drive, you'd have to download the VHD and mount it locally.

File Service offers a much simpler solution. You create Shares, which you can then access as SMB (Samba) network drives from Windows or Linux running in Azure. Multiple instances can read and write to these network drives simultaneously, and systems outside Azure can use a REST API or one of the SDKs to access the contents directly.

CROSS-SERVICE FEATURES
All these services benefit from several fundamental features. You can choose from various redundancy levels, from the cheapest, Local Redundancy, which stores three copies of each piece of data within the same data-centre, through other options which store additional copies in another data-centre, and, in the most powerful case, provide read-only access to that data through the secondary data-centre, which is great for reporting and data-warehousing.

You can also opt in to extensive metrics and analytics that provide complete and comprehensive data on operations, bandwidth and storage space used for all services, down to minute-by-minute stats.

With Blob, Table and Queue storage, you can create Shared Access Signatures (SAS) to allow explicitly-limited (e.g. read-only or write-only) direct access to the service via the HTTP API or one of the many SDKs, so mobile applications can store and retrieve data without you needing to maintain and run a web service. You can also set up Cross-Origin Resource Sharing rules so that sites running on specific domains can use a SAS to read or write directly to the service as well.

So, whether you already have systems running in Azure, or are considering building or migrating an application to run there, take some time to see if some or all of these services can boost your performance, increase your scalability, or just save you some money.

Mark is the founder and CEO of Oort Corporation, a new company building cloud-based software for people who build cloud-based software. Oort's first product, Zudio, a web-based Windows Azure Storage toolkit, launched in April 2013. Mark has been a Windows Azure Development MVP for three years. In his spare time, Mark works on the Simple.Data not-an-ORM and Simple.Web projects, and wanders the world speaking at conferences and user groups. Or he just geeks out learning new programming languages and frameworks; in 2013 he's been working a lot with TypeScript and AngularJS.

Certified ScrumMaster - CSM
2 days: 15. December in Oslo

Certified Scrum Product Owner - CSPO
2 days: 17. December in Oslo

BECOME A CERTIFIED SCRUMMASTER OR A PRODUCT OWNER WITH MIKE COHN

For complete course descriptions, time and place, visit www.programutvikling.no


Programming in Functional Style with Venkat Subramaniam

JAVA 8 IN A DAY
24. November in Oslo with Venkat Subramaniam

This three-day course, offered by award-winning author and trainer Venkat Subramaniam, will get you programming in a functional style. This course is not a theory of why this is a good idea, but a practical deep dive into the key aspects of functional programming. It will help you learn why, what, and exactly how to make use of it. Venkat is known for being a polyglot programmer and will demonstrate the functional style of programming in languages that the attendees use at their work each day.

The Java 8 release has brought the biggest, and a much needed, change to arguably the most powerful programming language in mainstream use today. The new syntactical additions are relatively small, but the semantic difference they make is large. Why is that? What are the reasons Java decided to introduce yet another programming paradigm? How can we benefit from these new features?

For sign up and complete course description visit www.programutvikling.no

International Developers in London is a group of meetup events conceived in early 2013, starting with a group for Italian-speaking developers in London. Since this early inception, Adam (the host) has organised events for French, Spanish, Portuguese and Polish-speaking developers in London.

Community is at the heart of what we do, with 2 technical talks at every meetup (on diverse topics such as Arduino development, JavaScript coding, Design, Agile methodologies and TDD, to name a few) given by the members, for the members. Presentations are given in English to help people with their presentational skills.

We will be represented by Adam in the community zone at NDC London, or you can see more at www.idinlondon.co.uk.

International Developers in London


Unlock the value of your data with ElasticSearch

By Tarjei Romtveit

We are living in the data age, with endless and ever-growing amounts of data. It may sound like a cliché, but data-driven decision-making is one of the most important differentiators for successful businesses.

Amazon, Google, Wal-Mart, Yahoo, Facebook and Dell are all examples of global behemoths that crunch a lot of data. They use data-driven insights to drive and shape ad campaigns, social campaigns, product placement and more. The quality and accuracy of their systems is mainly driven by large amounts of data collected both externally and internally. This intense crunching of data thankfully leads to great open source technology and best practices that are shared with the outside world. Hadoop and Cassandra are just two examples in this industry. A common trait of these tools is that they are all linearly scalable in terms of processing and storage, and supported by an external open source community. However, the learning curve is often quite steep before you can put them into production, and many smaller organizations have hesitated to deploy them. But gaining actionable insights from large amounts of data doesn't have to be that difficult. You don't necessarily need a full Hadoop environment to make sense of your data. Let's take a look at a typical use case for a smaller company, and see how we can make sense of data with the help of ElasticSearch. ElasticSearch is a great technology that is easy to learn, feature-rich, and offers great scalability out of the box.

THE USE CASE: ACME INC
ACME Inc is a mid-sized company that sells electronic components through an online store and mainly utilizes social media channels to target consumers. ACME collects a lot of information about its operations in an ERP system that contains massive amounts of sales and inventory information. In addition, they store analytics data from Facebook and the other media channels that they use for outbound communications.

We assume that the ERP system has a JSON API from which you can export transactional history. The data format is shown in Fig. 1:

    Fig 1.


Let's assume that all these entries are downloaded into a file called aBunchOfSalesData.txt and that we have an installation of ElasticSearch running locally. We can then index the data into ElasticSearch through the bulk API by using this curl command:

curl -s -XPUT localhost:9200/_bulk --data-binary @aBunchOfSalesData.txt

You can test that the insertion was correct by running a search query: http://localhost:9200/acme/sales/_search?q=*&pretty=yes

At the same time, ACME has a system that queries the Facebook Insights API every hour for updates and stores them in a document store. This can be indexed using the bulk API, and the format is shown in Fig 2.

Let's say the CMO at ACME wants to understand whether the money spent on their Facebook ad campaign actually generates sales. The ad campaign basically consists of a product ad being written on the ACME Facebook page and promoted to all of their followers. Since ElasticSearch holds both historic sales information and historic Facebook data, we can create a query to try to answer the CMO's question. It would be beneficial to get a timeline view with both sales data and the most important metrics from Facebook. If there are any positive correlations, it could indicate that the campaign did work. To perform this query we utilize the very powerful aggregation feature, together with the equally strong date and time functions in ElasticSearch.

The JSON query we use to extract the information is displayed in Fig 3 on the next page:

    Fig 2.

F# VIA MACHINE LEARNING WORKSHOP
Mathias Brandewinder

Machine Learning and Functional Programming are both very hot topics these days; they are also both rather intimidating for the beginner. In this workshop, we'll take a 100% hands-on approach and learn practical ideas from Machine Learning by tackling real-world problems and implementing solutions in F#, in a functional style.

8. December, 2 days
NOK 6900

DESIGN AND IMPLEMENTATION OF MICROSERVICES
Sam Newman

Microservices Architecture is a concept that aims to decouple a solution by decomposing functionality into discrete services. Microservice architectures can lead to easily changeable, maintainable systems that can be more secure, performant and stable.
28. November, 1 day
NOK 4900

KOTLIN WORKSHOP
Hadi Hariri

Kotlin is gaining a lot of traction. Close to release, there are already many companies and individuals shipping production code in Kotlin, with some having called it the Swift for Android.
12. January, 1 day
NOK 4900

FOR COMPLETE COURSE DESCRIPTIONS, TIME AND PLACE, VISIT WWW.PROGRAMUTVIKLING.NO


    Fig 3.
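The article's actual Fig 3 is not reproduced in this transcript, but a query of the kind described, an hourly date_histogram over the last 48 hours with nested sum sub-aggregations, might look roughly like the following sketch (the field names timestamp, netTotal and views are assumptions for illustration):

```python
import json

# Hedged sketch of an hourly date_histogram with nested sum
# aggregations, in the style of the query the article describes.
query = {
    "size": 0,
    "query": {
        "range": {"timestamp": {"gte": "now-48h"}}
    },
    "aggs": {
        "per_hour": {
            "date_histogram": {"field": "timestamp", "interval": "hour"},
            "aggs": {
                "netTotal": {"sum": {"field": "netTotal"}},
                "views": {"sum": {"field": "views"}}
            }
        }
    }
}

# The body would be sent to the search endpoint, e.g.
# curl -s -XPOST 'localhost:9200/acme/_search' -d @query.json
print(json.dumps(query, indent=2))
```

Each per_hour bucket in the response would then carry its own netTotal and views sums, ready to plot on a shared timeline.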

At first glance, the query displayed in Fig 3 looks quite verbose, but this is everything we need to make an hourly comparison of the Facebook post views and the net total sales during the past 48 hours. In most other systems you would need to run several queries, and maybe do the hourly aggregations manually in the application layer. Two of the reasons you do not need to run multiple sub-queries are that you can query two entirely different indexes without joins, and that you can apply the powerful nested aggregation feature. The nested aggregation feature is able to calculate values based on the output of its parent aggregation, which completely removes the need for traversal in ACME's application layer. The result of all these features is that you can add data types as they emerge and just make small changes to a single query to get comparable answers immediately.

The output of the query is a list of all the hits and several aggregations called buckets. Each of these buckets contains the aggregations of netTotal, views and any other data we want to aggregate. To visualize the result of the queries, you can use the ElasticSearch Kibana dashboard. Another option is to feed the JSON data to a slightly more sophisticated graphing tool like D3.js or Google Charts. An example graph output from the query we ran earlier is shown in Fig 4.

Fig 4 shows that the accumulated post views increase heavily quite early in the morning. Around the same time, sales peak at 70 000 NOK for the campaigned product. However, a few further questions emerge just by looking at the data in the graph. What triggers the peak late in the evening of 23.09? What other products and product ranges are leveraged in the same period? How many customer transactions, and how many items per transaction, are there in the campaign period? To answer these questions you may have to add some more data, extend the query, and run the process again, iteratively, until your CMO is satisfied; the opportunities are endless.

    Fig 4.

Tarjei Romtveit is a senior consultant and co-founder of Monokkel A/S. Monokkel is a consulting company that focuses on all things data. Before founding Monokkel, Tarjei was CTO at Integrasco A/S, a Norwegian social media analytics and intelligence start-up, where he solved big data related problems on a daily basis from 2006.

CONCLUSION
We have looked at a typical business question, added some data, and tried to query ElasticSearch for an answer. We have seen that ElasticSearch is hugely versatile and flexible in handling different data types, not only full-text search. The query language and APIs enable you to quickly explore your data, unlock the value of that data, and rapidly improve your results. Go try it out yourself!


RonDale/Shutterstock

    By Rachel Laycock

Architecture for CD


A few years ago a client asked us (ThoughtWorks) to help them implement Continuous Delivery in their organisation. They were on six-monthly release cycles which were painful and fraught with risk, requiring the entire development, testing and operations team to come in for a whole weekend to release their products, sometimes several weekends if the first attempt wasn't successful. An executive read the Continuous Delivery book and decided they wanted, and needed, to deliver much more rapidly to continue to compete in the market. We of course said sure. My team had all been delivering software using the principles and practices of Continuous Delivery for years.

But three months later the executives were asking "where is my Continuous Delivery?". We had failed to implement CD in any meaningful way. So what went wrong? How did we fail at something we knew how to do and had done on many projects before?

Their code base was huge and complex: over 70 million lines of code and millions of dependencies. To be blunt, their architecture was a mess.

There are specific considerations for your architecture when you are deploying early and often. You put your software in operational mode early, which is good. But what does this mean for how you design and architect your system? Especially if you would like to achieve continuous delivery and you have existing balls of mud. My experience left me with three considerations:

1. Conway's Law is the law

2. Keep things small

3. Evolve your architecture


CONWAY'S LAW IS THE LAW
One of the main reasons software architecture is so hard is its fundamental link to an organisation's structure, also known as the people and process. Once you get humans involved in any problem, you have a very complex problem to solve.

Conway's Law states: "organisations which design systems ... are constrained to produce designs which are copies of the communication structures of these organisations".

The most explicit example I have seen of this is the monolith problem.

In most organisations that have a monolith system, with teams broken up into specialists like UI developers, Java developers and database developers, many of the issues are at the seams of the communication between these specialisms. This problem presents itself in many ways: bottlenecks or wait time in your delivery cycle for any feature that requires more than one specialised team, or code being put in weird places just because it needs to be there, when your specialism can't actually check in to any of the layers owned by another team. The latter issue is eloquently described by Fred Brooks in The Mythical Man-Month: "Because the design that occurs first is almost never the best possible, the prevailing system concept may need to change. Therefore, flexibility of organisation is important to effective design." With a specialised, inflexible team structure like this, you are likely either to get stuck with your original design, good or bad, or your abstractions will start to leak as you put things like business logic wherever you can, rather than where they should go from a sound design perspective.

I've seen many teams fall on the wrong side of Conway's Law and end up with the dreaded ball of mud architecture. But there is hope. You can leverage Conway's Law to your advantage. Another client had a similar monolith problem...

They had built a monolith application whose users were country-based, e.g. Australia, Italy, UK. Many of their requirements were the same, but many were region-specific. Because they were all in the same monolith, when changes were made they affected everyone, regardless of whether that made sense for a given region. This was causing problems getting software out of the door. They were tripping over one another at release time, breaking shared components to meet their own needs. Eventually they had had enough and decided they needed to make some fundamental changes in order to deliver continuously.

First they re-organised their specialists to create feature teams; these teams were made up of all the skills they needed to create their full stack, from the UI down to the database. These new teams needed to figure out what was really shared and what was not, but initially they just needed to get stuff to production. So, they forked the code base. They now all had their own code, which meant they could release independently, but it did not solve the problem of figuring out what was shared. They had to create a "scrum of scrums" to decide as a group what the shared components were. Because they had been burned badly by sharing, they were very careful to be absolutely certain something really was a fundamental component that would rarely change. They did this one small single responsibility at a time. The end result of these shared components was a set of microservices.

KEEP THINGS SMALL
Microservices can be loosely defined as fine-grained SOA, or, as some people refer to it, "SOA done right", given what we have learned about SOA done wrong and the automation of our deployments and infrastructure configuration; but that wouldn't be as catchy. They are small, run in independent processes, and are decoupled enough that they can be deployed independently. Much of their appeal lies in being able to use the right tool for the job instead of the golden-hammer approach, and in the ability to build in resilience that would be difficult to achieve in a monolith system. They can also be scaled independently and easily replaced, as long as you keep them small and follow the single responsibility principle. But this is not a free lunch.


Microservices move communication and complexity into your infrastructure, which increases the cognitive load of your architecture. And to get the resilience benefits, you have to build them in. Monitoring your system becomes as important as writing tests for it; you need a decent versioning strategy and a very high level of automation of your deployment and environment configuration to deploy and run these services in independent processes with confidence. Neal Ford put it best when he recently said "You have to be this tall to use microservices", meaning a high level of maturity needs to be achieved in things like automation and deployment before microservices can be for you. Otherwise you will end up with a much worse problem than the monolithic one.

EVOLVE YOUR ARCHITECTURE
Finally, as you will be operationalising your software early, you need to be able to evolve your architecture.

Neal Ford, in his IBM series, talks about Evolutionary Architecture and Emergent Design. There are 5 principles to Evolutionary Architecture:

1. The Last Responsible Moment
2. Architect for Evolvability
3. Postel's Law
4. Architect for Testability
5. Conway's Law

I recommend reading the series to get more details on each of these principles. There are many considerations to concern yourself with in the architecture of your system, but not all will be prioritised equally for the problems your organisation is trying to solve. These principles will guide you to solve only those you need to now and leave the rest for a later point in time, but not irresponsibly late!

Since the world of Continuous Delivery has started to become the norm, we find ourselves in deployment and operations early and often. The world of software architecture has changed as a result. We now have to architect for build, run and deploy. Structure your teams how you want your software to look and create communication structures for the interconnected pieces. You will need to keep working at this because you are unlikely to get it right the first time. None of us do.

Rachel Laycock works for ThoughtWorks as a Market Technical Principal with over 10 years of experience in systems development. She has worked on a wide range of technologies and the integration of many disparate systems. Since working at ThoughtWorks, Rachel has coached teams on Agile and Continuous Delivery technical practices and has played the role of coach, trainer, technical lead, architect, and developer. She is now a member of the Technical Advisory Board to the CTO, which regularly produces the ThoughtWorks Technology Radar. Rachel is fascinated by problem solving and has discovered that people problems are often more difficult to solve than software ones.


The landscape of business has changed dramatically over the past several years. For decades, the focus has been on cost control and technology, whereas the current business climate has prompted a paradigm shift. Success for companies in the 21st century is now dependent upon creativity and innovation, both hailed as the most important contributors to the growth of the economy.


    By Denise Jacobs

    Creativity and Innovation:

CRITICAL TOOLS FOR SUSTAINED SUCCESS

Creativity is the ability to develop meaningful new ideas through exercising imagination and originality. Contrary to popular belief, creativity is not relegated to a select few: we are all born creative. However, creativity is like a muscle: it grows stronger with repeated practice and exercise and weaker with disuse.

Innovation is the practice of making changes to that which is established, using creativity to enhance and improve upon known concepts, practices, or processes. Similar to creativity, an innovation mindset becomes ingrained through building the habit of thinking in certain ways, and is sustained through a supportive environment.

PROBLEMS AND SOLUTIONS
Unfortunately, despite recognizing creativity and innovation as critical tools for sustained success, many companies are slow to adapt to this new environment. At best, the few business leaders who truly understand how critical it is to initiate the shift within their organizations are at a loss as to how to start the process. At worst, organizations merely pay lip service to the importance of creative thinking and having an innovation mindset, but do little to support it. These are the companies that encourage people to take risks and be innovative, but then punish them when they make mistakes or their ideas aren't immediately lucrative.

The sad thing about both of the above scenarios is the supreme waste of talent and resources that they produce. Brilliant employees become unmotivated and bored, merely going through the motions of their jobs, and the company's greatest source of creativity and innovation lies dormant.

The good news is that leveraging the untapped, latent talents of the members of an organization can reverse this downward spiral. Through reigniting your workforce's creative spark and inspiring innovation at all levels, from the top down and the bottom up, companies will be well on their way to improving employee engagement by helping people feel connected to and passionate about their work and their ability to meaningfully contribute to their company. Therefore, training your employees in creative thinking and innovation skills, so that they not only understand the power of creative thinking but also have the tools to apply it, should be a top priority for your organization.

FOUR TIPS FOR REIGNITING CREATIVITY AND INSPIRING INNOVATION
Whether you need to instill the creative spirit or revive flagging creative inspiration in the members of your organization, here are four ways to do so:

1. Help individuals and teams get unblocked creatively
Often, the biggest blocks to creative, innovative thinking come from fears: fear of making a mistake and fear of failure. Further, people often feel creatively stymied when they perceive that there are no proper outlets for sharing ideas: that they'll be criticized for original thinking, or that their unique concepts will be dismissed.

Make it a policy to be more open-minded and to suspend judgment on ideas, especially in the early stages and particularly with the unusual or seemingly random ideas. Encourage spontaneity and experimentation, and give people the responsibility and freedom to make mistakes.

The more comfortable people feel with being able to fail and try again with fewer repercussions, the more they relax their guard and allow ideas to flow. They will relearn how to trust their creative gut, and start the process of breaking down their creative blocks.

2. Advocate and practice effective communication
Creativity and innovation flourish most in groups where there is fantastic communication: the sharing, listening to, and amplification of ideas not only amongst the team members but with customers as well.

Effective communication has two sides: listening and sharing. People with the best ideas are most often those who are adept listeners, and so are the best leaders. They are stimulated by the concepts of others and connect the dots in novel ways to create even better ideas. Listening well requires being present, giving people your full attention, and relaxing your own agenda. Doing so allows you to hear the brilliance in others.

Almost as important as listening is being able to clearly articulate and share ideas. In fact, companies with highly creative cultures support their employees in idea-selling, because "it's not about good ideas. It's about selling those ideas and making them happen," according to marketing guru Seth Godin. Learning presentation skills is a great way to give people the tools and the confidence to articulate and sell their concepts, making it easier for their great ideas to gain traction within the team and company.

By learning to become excellent listeners and generous sharers, team members will become master communicators who practice a dynamic, responsive, and generous sharing and exchange of ideas.

3. Champion a culture of creative collaboration
Forget the mythology of the lone genius cranking out innovations from a garage. People coming together to share ideas, compare observations, and brainstorm solutions to complex problems power inspired creativity and sustained innovation. Fostering and harnessing the creative abilities of a group produces a wider range of creative ideas and innovative solutions, arising from the range of knowledge, experience, and perspectives of the individuals in the group.

One of the best ways to help the people within your teams create better together is to teach them to amplify the creative ideas of others. Our tendency is to try to find problems with another's ideas, which brings ideation to a standstill. However, ideas blossom when team members get in the habit of responding to an offered idea not with a "Yes, but", but instead with a "Yes, AND".


Encouraging teaching and learning within the team is also powerful. Creating a culture of mentorship facilitates the brain-share of the more experienced members with the more junior, and not only raises the whole level of the team, but also strengthens connections and trust, fueling strong collaboration down the road. Brand-new members coming into the team should be on-boarded with the innovation mindset of the group, so that they come into their new work environment ready to develop and share their ideas. The trust built by these dynamics binds the team members together in productive collaboration.

4. Provide the resources to support and execute upon great ideas
It's one thing to talk the creativity and innovation talk, but to actually walk that talk, the very necessary resources of time, money, manpower, training/methods, and materials must be available in order to support the workforce in implementing their creative and innovative processes and products.

Individuals and teams need the space to generate, develop, and experiment with ideas. Because of this, one of the most critical resources that encourages and supports creativity and innovation is time. The best example of this is Google's 20% time, which has created some of the most successful Google products.

But time is not enough. Make sure there is budget for taking great ideas to the next level. Ensure that your initiative to cultivate creativity and spur innovation is strengthened by trainings and medium- to long-term programs. Doing so will help build teams that are synergistic and can produce well together when it's time to start moving their grand ideas to market.

WELL WORTH THE EFFORT
Once you get on board with promoting creativity and innovation within your organization, you'll wonder why you didn't do it sooner. Even in the short term, you'll begin to see benefits such as improved teamwork and team cohesion, better employee engagement and productivity, increased attraction and retention of talented employees, and enhanced problem-solving and interaction.

By providing the resources that sustain and grow an enduring culture of creativity and innovation, you'll make it safe for people in your organization to bust through their creative blocks in order to grow their ideas and experiment with them, generously listen to and share ideas, and creatively collaborate to generate innovative solutions as a team. The companies that prevail in the upcoming years will boast both a creative and innovative leadership and workforce that, in tandem, will skyrocket the success of the company's products and services.

So, are you ready to make a commitment to igniting creativity and inspiring innovation within your organization? Make the effort, and you'll see how you can transform how you and your team work for the better.


Denise Jacobs is a Speaker + Author who speaks at conferences and consults with companies worldwide. As the Founder + Chief Creativity Evangelist of The Creative Dose, she teaches techniques to make the creative process more fluid, methods for making work environments more conducive to creative productivity, and practices for sparking innovation. She is the author of The CSS Detective Guide, and co-authored the Smashing Book #3 1/3 and Interact with Web Standards. Denise is the Chief Unicorn of Rawk The Web, and the Head Instigator of The Creativity (R)Evolution.


The three principles, and their interaction, that set Erlang apart are:

* share nothing: processes communicating with message passing
* fail fast approach to errors: let the processes fail when they err
* failure handling by using supervision of processes by other processes

It is one thing to understand what each of them means, another thing to understand how they work together, and yet another thing to apply them when developing a program.

In this article I will not go into a lengthy explanation of how the Golden Trinity of Erlang (see figure on the left) plays out, but rather focus on showing how to think with the principles as we look at the creation of an Erlang program.

In this article we will look at processes and how to program with asynchronous message passing. In later articles we will deal with fail fast and failure handling.

GAME OF LIFE
Conway's famous Game of Life (GOL) [https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life] is the example we will look at in this article.

GOL is a zero-player game where a number of cells arranged in a 2-dimensional grid evolve over time in a step-wise manner based on the configuration of the grid.

The rules for the state - alive or dead - of a cell in the next step are simple:

1. Any live cell with fewer than 2 live neighbours dies.
2. Any live cell with 2 or 3 live neighbours survives.
3. Any live cell with more than 3 live neighbours dies.
4. Any dead cell with exactly 3 live neighbours becomes alive.
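These rules map directly onto a rule function. The code later in this article calls a next_content(Content, NeighbourCount) function without showing it; the following is a minimal sketch of how such a function might be written, assuming (as the neighbour-summing code suggests) that a cell's content is represented as 1 for alive and 0 for dead:

```erlang
%% Sketch only: assumes Content is 1 (alive) or 0 (dead) and
%% NeighbourCount is the sum of the neighbours' contents.
next_content(1, N) when N < 2 -> 0;   %% underpopulation: fewer than 2 live neighbours
next_content(1, N) when N =< 3 -> 1;  %% survival: 2 or 3 live neighbours
next_content(1, _) -> 0;              %% overpopulation: more than 3 live neighbours
next_content(0, 3) -> 1;              %% reproduction: exactly 3 live neighbours
next_content(0, _) -> 0.              %% a dead cell otherwise stays dead
```

Each clause corresponds to one of the rules above, with pattern matching and guards standing in for the if/else chains you would write in most other languages.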

The typical way of representing a GOL world is to use, not surprisingly, a 2-dimensional array.

You could do that in Erlang too, but that is not the Erlang way!

PROCESSES, PROCESSES, PROCESSES
When you want something done in Erlang you throw processes at it.

In object-oriented languages you think in objects. In Erlang the cheap resource is processes, which the run-time provides in a very lightweight manner, allowing the creation of 136,000 processes on a Raspberry Pi!
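To get a feel for just how cheap processes are, here is a small illustrative sketch (not from the article's code base) that spawns N processes, each of which simply waits for a stop message:

```erlang
%% Hypothetical demonstration of lightweight process creation:
%% spawn N processes that each block in a receive until told to stop.
spawn_many(N) ->
    Pids = [spawn(fun() -> receive stop -> ok end end)
            || _ <- lists:seq(1, N)],
    %% Tell them all to terminate again and report how many we made.
    lists:foreach(fun(Pid) -> Pid ! stop end, Pids),
    length(Pids).
```

Calling spawn_many(100000) in the shell completes in well under a second on ordinary hardware, which is what makes the process-per-cell design below practical.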

So for GOL the Erlang way is to let each cell in the grid be a process.

When computing the next value the cell needs to know the status of each of its neighbours. The only way processes can communicate in Erlang is by message passing, so the cell process has to send a message to each of its neighbours and collect the answers to count the number of neighbours it has.

You will hear people talking about Erlang being different; I, for one, am putting that statement forward. By being different I'm not referring to the peculiar syntax Erlang has due to its first interpreter being written in Prolog. No. What makes Erlang truly different is the Golden Trinity of Erlang.


    By Torben Hoffmann

THINKING LIKE AN ERLANGER, Part 1


query_neighbours(T, Neighbours) ->
    lists:foreach(
        fun(N) ->
            N ! {self(), {get, T}}
        end,
        Neighbours).

Here Neighbours is a list of Pids and T is the current time in the simulation. After sending messages to all its neighbours the process starts collecting answers and responding to requests from its neighbours:

collecting(#state{time=Time, content=C, xy=XY}=State, NeighbourCount, WaitingOn) ->
    receive
        {From, {get, Time}} ->
            From ! {cell_content, {{XY,Time},C}},
            collecting(State, NeighbourCount, WaitingOn);
        {cell_content, {{{_X,_Y},Time}=XYatT, NeighbourContent}} ->
            collecting(State, NeighbourCount + NeighbourContent,
                       lists:delete(XYatT, WaitingOn))
    end.

So if a neighbour queries a cell for its value ({get, Time}), the response is sent back to the requesting process as a tuple {cell_content, {{XY, Time}, C}} and the cell continues its loop. If a reply comes back from one of the neighbours, the NeighbourCount is updated and the neighbour is removed from the list of neighbours the cell is waiting on.

When there are no more neighbours to collect from, the cell updates its state to the next point in time:

collecting(#state{xy=XY, content=Content, time=T, history=History}=State, NeighbourCount, []) ->
    NextContent = next_content(Content, NeighbourCount),
    lager:info("Cell ~p changing to ~p for time ~p", [XY, NextContent, T+1]),
    State#state{content=NextContent,
                time=T+1,
                history=[{T, Content}|History]};

This code is straightforward: it figures out what the new content of the cell should be based on how many neighbours it has (the next_content/2 function) and then increments the time and updates the history in the state of the cell.

ALL IS WELL, RIGHT?
Not really.

We have dealt with the first problem when it comes to making processes communicate with one another: asking for information and replying to queries. If we run this code one time step at a time it will be fine. We will get all the cells to update, and once they are done we can do the next step.

But if we let the cells run freely, i.e., proceed as fast as they can from one time step to the next, they will quickly come out of sync. E.g., the first cell to go to time T=2 will start asking for cell content that its neighbours have not computed yet, and when their requests for the cell content at T=1 hit the cell, it will not have a message clause to match the incoming request, since it is now looking for {From, {get, 2}} messages and not {From, {get, 1}} messages.

This is a problem that is painfully obvious and hits you really fast with the Game of Life simulation, but when you are doing distributed systems this problem is often a lot harder to spot and tends to hit you when you least expect it.


So how does one go about solving such a problem, with processes getting slightly out of sync, without resorting to synchronous solutions?

You have to provide two things in the processes:

1. Dealing with being ahead of the rest.
2. Dealing with being behind the rest.

For our Game of Life we solve the problem of being ahead by introducing a function that uses the history of the cell to give replies to neighbours that are behind:

content_at(Time, #state{xy=XY, time=Time, content=Content}) ->
    {{XY,Time}, Content};
content_at(Time, #state{xy=XY, history=History}) when is_integer(Time), Time >= 0 ->
    {_, Content} = lists:keyfind(Time, 1, History),
    {{XY, Time}, Content}.

Solving being behind requires a bit more work. First we change content_at/2 to return the atom future if the value has not been calculated yet:

content_at(Time, #state{time=T}) when Time > T ->
    future;

    Now we can write the handling of requests like this:

receive
    {From, {get, Time}} ->
        case content_at(Time, State) of
            future ->
                collecting(State#state{future=[{From, Time}|State#state.future]},
                           NeighbourCount, WaitingOn);
            C ->
                From ! {cell_content, C},
                collecting(State, NeighbourCount, WaitingOn)
        end;
    ...

So if someone wants something that we consider the future, we add the request to the future list in our state and carry on.


Those future requests collected are then resolved when the cell computes its next step:

collecting(#state{xy=XY, content=Content, time=T, history=History, future=Future}=State, NeighbourCount, []) ->
    NextContent = next_content(Content, NeighbourCount),
    NewFuture = process_future(XY, T+1, NextContent, Future),
    lager:info("Cell ~p changing to ~p for time ~p", [XY, NextContent, T+1]),
    State#state{content=NextContent,
                time=T+1,
                history=[{T, Content}|History],
                future=NewFuture};

The process_future/4 function simply sends replies to those waiting on this new value:

process_future(XY, Time, Content, Future) ->
    {Ready, NewFuture} =
        lists:partition(
            fun({_Pid, T}) ->
                T == Time
            end,
            Future),
    lists:foreach(
        fun({Pid, _}) ->
            Pid ! {cell_content, {{XY,Time}, Content}}
        end,
        Ready),
    NewFuture.

It takes all those waiting on the current Time and sends them a message, and keeps the rest as they are, even further out in the future.

WRAPPING UP
Dealing with asynchronous message passing can be a bit tricky, but the rewards in terms of making your system more scalable are so great that it is worth it.

The Reactive Manifesto [http://www.reactivemanifesto.org] also embraces asynchronous message passing, and Erlang is a wonderful language for doing reactive systems in that spirit.

If you want to play with the entire code base for Game of Life you can go to [https://github.com/lehoff/egol] and clone it. The tag ndc1 has the commit that was used to create the final version of the code for this article.

In part 2 of Thinking like an Erlanger I will look at fail fast and supervision so that we can make Game of Life more robust.

Torben is the CTO of Erlang Solutions and has been working with Erlang at Motorola and Issuu as technical architect and developer since 2006. He has talked about his Motorola achievements at Erlang eXchange 2008 and EUC 2010. He has been holding the Erlang banner high as a self-confessed Erlang Priest at several conferences such as CodeMesh, Build Stuff, Goto, Craft and Lambda Days. Before becoming an Erlanger he worked with software quality assurance, process improvement and people management.


We transform how you and your team work for the better.

Discover our leading-edge approach to creative inspiration and thinking, idea generation and execution, and effective communication and team cohesion that will skyrocket engagement, innovation, and productivity.

We're available and at the ready for workshops, consulting, keynotes/playnotes/speaking, and coaching.

Work Better. Produce More. Create Betterness.

Isn't it time to reignite your team's creative spark, inspire innovation, and cultivate collaboration?

We can help.

TheCreativeDose.com A CREATIVITY + INNOVATION COLLECTIVE

    Come see us in action at NDC People, February 2015 in Oslo, Norway!

    For more information and to register, visit people.ndcevents.com.


I've been working with Scrum since its earliest days and over the past few years have had the privilege of sitting on the board of directors of the Scrum Alliance. From this vantage point I have observed that, collectively as an industry, we suffer from Scrumbutaphobia. This is the fear that we are doing Scrum wrong and are not following the Scrum rules, also known as "Scrum But". (When asked if you are using Scrum in your organization, you answer: "I am doing Scrum, but...")

    By Stephen Forte

Create a CUSTOM AGILE PROCESS for Your Organization


This fear comes from the fact that we all have taken Scrum to its limits, modified it beyond the rules to suit our needs, and implemented something that looks and feels like Scrum in our organization, usually with success. That said, even successful organizations have Scrumbutaphobia.

OVERCOMING SCRUMBUTAPHOBIA
This is crazy; Agile is all about embracing change! In an industry survey asking organizations what kind of methodology they were using, only 31% adhered to a single Agile methodology, while 67% used mixed methodologies, both Agile and non-Agile. You are not alone.


    Inste