Bitcoin ATM - yep, this is an ATM that takes in real fiat money and loads bitcoins into your wallet. You hold your wallet's QR code up to the machine, and that's how it gets the address to send the bitcoins to. The price you pay is based on a feed from exchanges; I think they are using Bitstamp. The price was around $127/BTC last weekend at the Crypto Currency Conference in Atlanta.
Windows Service Bus offers a powerful, durable messaging backbone for distributed and cross-platform systems. This is a quick example of how to use it to send messages across machines using Topics and Subscriptions.
Example scenario: employees of service provider X need to be able to manage styles for their customer-branded applications and build new versions of those applications in real time.
The old way: employees would save branding/styles and then manually build a new version of the applications that use those styles. This was a costly and time-consuming process.
The solution: automate the build of the applications every time styles are saved or updated. Use Service Bus as the messaging mechanism that links the web front-end application with the build server.
Sequence Diagram
Initialize Topics & Subscriptions
The first thing we need to do is make sure the topics we want to send messages on, and the subscriptions we want to receive them from, actually exist. Here is a simple initialization method we call to ensure they are set up.
public static void InitializeServiceBus(NamespaceManager namespaceManager)
{
    if (!namespaceManager.TopicExists(Constants.Topics.BuildRequestTopic))
        namespaceManager.CreateTopic(Constants.Topics.BuildRequestTopic);

    if (!namespaceManager.TopicExists(Constants.Topics.BuildResponseTopic))
        namespaceManager.CreateTopic(Constants.Topics.BuildResponseTopic);

    if (!namespaceManager.SubscriptionExists(Constants.Topics.BuildRequestTopic, Constants.Subscriptions.BuildRequestSubscription))
        namespaceManager.CreateSubscription(Constants.Topics.BuildRequestTopic, Constants.Subscriptions.BuildRequestSubscription);

    if (!namespaceManager.SubscriptionExists(Constants.Topics.BuildResponseTopic, Constants.Subscriptions.BuildResponseSubscription))
        namespaceManager.CreateSubscription(Constants.Topics.BuildResponseTopic, Constants.Subscriptions.BuildResponseSubscription);
}
Topics in Azure
Style Controller
Using ASP.NET MVC, we'll create a controller that handles the "save changes" event. It captures the new style and sends a "build request" message to the Service Bus.
public class HomeController : Controller
{
    private readonly NamespaceManager namespaceManager;
    private readonly TopicClient buildRequestTopic;
    private readonly IStorageProvider storageProvider;

    public HomeController()
    {
        namespaceManager = NamespaceManager.Create();
        InitializeServiceBus(namespaceManager);
        buildRequestTopic = TopicClient.Create("BuildRequestTopic");
        storageProvider = new FileSystemStorageProvider();
    }

    public void Save()
    {
        var tenant = Request["tenant"];
        var css = Request["css"];
        var cssData = Encoding.Default.GetBytes(css);

        // Store the raw CSS and get back a bundle id
        var bundleId = storageProvider.Store(cssData, tenant);

        // Send a build request message carrying the bundle id and tenant
        var message = new BrokeredMessage();
        message.Properties.Add("bundleId", bundleId);
        message.Properties.Add("tenant", tenant);
        buildRequestTopic.Send(message);
    }
}
You can see the code above uses NamespaceManager to make sure the topics and subscriptions exist, and a TopicClient to send build requests on. The HTML page will provide a simple UI to change the CSS for a given tenant, and a way to kick off a build of the application based on the changed CSS.
UI to Manage Style
When the build button is clicked, the "Save" action in the code snippet above is called and a message is sent to the Service Bus. Another process (the "Worker" process) will pick up that message and process the change as a new build.
The Worker Program
The worker code is very simple: it starts up a subscription and listens for build requests, responds to each one by building the application and storing it, and then sends back a "build done" event.
public static void Main()
{
    IStorageProvider storageProvider = new FileSystemStorageProvider();

    var namespaceManager = NamespaceManager.Create();
    InitializeServiceBus(namespaceManager);

    var buildResponseTopic = TopicClient.Create(Constants.Topics.BuildResponseTopic);
    var client = SubscriptionClient.Create(Constants.Topics.BuildRequestTopic, Constants.Subscriptions.BuildRequestSubscription);

    while (true)
    {
        var message = client.Receive();
        if (message != null)
        {
            try
            {
                var bundleId = (string)message.Properties[Constants.Properties.BundleId];
                var tenant = (string)message.Properties[Constants.Properties.Tenant];
                Console.WriteLine("Got bundleId: " + bundleId + ", for tenant: " + tenant);

                // Load the CSS bundle, build the application, and store the result
                var cssData = storageProvider.Get(bundleId);
                var css = Encoding.Default.GetString(cssData);
                var appBuild = BuildApplication(css, tenant);
                var appBuildId = storageProvider.Store(appBuild, tenant);
                Console.WriteLine("Built application, buildId: " + appBuildId);

                // Publish the "build done" event
                var response = new BrokeredMessage();
                response.Properties.Add(Constants.Properties.BundleId, bundleId);
                response.Properties.Add(Constants.Properties.BuildId, appBuildId);
                buildResponseTopic.Send(response);

                // Remove message from subscription
                message.Complete();
            }
            catch (Exception)
            {
                // Indicate a problem, unlock message in subscription
                message.Abandon();
            }
        }
    }
}
Push Notification (Build Done Event)
Finally, if we switch back to the UI application, we'll notice a separate thread is started in the Global.asax "Application_Start" method.
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    WebApiConfig.Register(GlobalConfiguration.Configuration);
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    (new Thread(ListenForBuildResponses)).Start();
}
This starts a background thread that listens for build-finished events and notifies the UI when a build is done (via SignalR).
private void ListenForBuildResponses()
{
    var namespaceManager = NamespaceManager.Create();
    Core.Utilites.InitializeServiceBus(namespaceManager);

    var client = SubscriptionClient.Create(Core.Constants.Topics.BuildResponseTopic, Core.Constants.Subscriptions.BuildResponseSubscription);

    while (true)
    {
        var message = client.Receive();
        if (message != null)
        {
            try
            {
                var buildId = (string)message.Properties["buildId"];

                // Push the "build done" notification to all connected clients.
                // NotificationHub is the SignalR hub class exposed to the browser (not shown here).
                var hubContext = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
                hubContext.Clients.All.buildDone(buildId);

                // Remove message from subscription
                message.Complete();
            }
            catch (Exception)
            {
                // Indicate a problem, unlock message in subscription
                message.Abandon();
            }
        }
    }
}
Finally we add a little JavaScript to the HTML page so SignalR can push a build done notification to the UI.
$(function () {
    var notificationHub = $.connection.notificationHub;

    notificationHub.client.buildDone = function (buildId) {
        $("#target")
            .find('ul')
            .append($("<li/>").html("Build Done: " + buildId));
    };

    $.connection.hub.start();
});
At this point the UI displays the download link and the user downloads the newly built application.
Part of building stateless systems that scale horizontally is using a distributed cache (where state is actually stored). This guide outlines the different types of items one will probably need to cache in a system like this, where to cache them (locally or in the distributed cache), how to use the cache, what timeouts to use, and so on.
Horizontal Scale
First, let's review what a horizontally scaled system looks like.
Machines 1 and 2 accept requests from the load balancer in a non-deterministic way, meaning there is no affinity or sticky sessions. So the machines need to be stateless: they don't manage state themselves. State is stored in a central place, the distributed cache. Machines can be taken offline without killing a bunch of user sessions, and more machines can be added and load distributed as needed.
Types of Cache
Notice there are two types of caches here:
1) Local caches - these are in-memory caches on each machine. This is where we want to store stuff that has long timeouts and is not session specific.
2) Distributed cache - this is a high-performance cluster of machines with a lot of memory, built specifically to provide an out-of-process memory store for other machines/services to use. This is where we want to store stuff that is session specific.
Using Cache
When using information that is cached, you should always try to get the information from the cache first; if it's not there, get it from the source, store it in the cache, and return it to the caller. This is called the read-through cache pattern. It ensures you always get data in the most efficient way possible, only going back to the source when needed.
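Here is a minimal sketch of the read-through pattern in C#. The ICache interface, the key format, and the GetBalancesFromSource call are illustrative placeholders (not the API of any particular caching library); the shape of the check/load/store/return flow is the point.

using System;
using System.Collections.Generic;

// Hypothetical cache abstraction used only for this sketch.
public interface ICache
{
    T Get<T>(string key) where T : class;
    void Put<T>(string key, T value, TimeSpan timeToLive) where T : class;
    void Remove(string key);
}

public class BalanceService
{
    private readonly ICache cache;

    public BalanceService(ICache cache)
    {
        this.cache = cache;
    }

    public IList<decimal> GetBalances(string sessionId)
    {
        var cacheKey = "S." + sessionId + ".Balances";

        // 1) Try the cache first
        var balances = cache.Get<IList<decimal>>(cacheKey);
        if (balances != null)
            return balances;

        // 2) Not cached - go back to the source
        balances = GetBalancesFromSource(sessionId);

        // 3) Store it with a short timeout so it stays close to the source
        cache.Put(cacheKey, balances, TimeSpan.FromSeconds(30));

        // 4) Return to the caller
        return balances;
    }

    private IList<decimal> GetBalancesFromSource(string sessionId)
    {
        // Placeholder for the real data access call
        return new List<decimal>();
    }
}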
Cached Item Lifespan
There are basically two things to consider when thinking about a cached item's lifespan.
1) How long should something remain in the cache before it has to be refreshed? This varies depending on the type of data. Some things, like the logo of a tenant in a multi-tenant system, should have a long timeout (hours or days), while other things, like an array of balances in a banking system, should have a short timeout (seconds or minutes) so they are almost always up-to-date.
2) When should stuff be removed from the cache? You should always remove something from the cache when you know you are about to do something that will invalidate the information previously cached. For example, if you are about to execute a transfer, you should invalidate the cached balances, because you'll want the latest balances from the source after the transfer has happened (since an update has occurred). Basically, any time you can identify an action that will invalidate (make inconsistent) something in the cache, remove that item so it can be refreshed; a short sketch of this follows below.
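Continuing the hypothetical BalanceService sketch from above, invalidation around a transfer could look like this (TransferFunds is a placeholder for the real transaction call):

public void Transfer(string sessionId, string fromAccount, string toAccount, decimal amount)
{
    // Execute the transfer against the source system
    TransferFunds(fromAccount, toAccount, amount);

    // The cached balances are now stale; remove them so the next
    // read goes back to the source and re-caches fresh data.
    cache.Remove("S." + sessionId + ".Balances");
}

private void TransferFunds(string fromAccount, string toAccount, decimal amount)
{
    // Placeholder for the real transfer logic
}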
Designing Cache Keys
You should take the time to design a good cache key strategy. The strategy should make it clear to your development team how keys are constructed. I'll present one way to do this (but not the only way). First, think about the types of data you'll be caching. Let's say a typical multi-tenant system consists of the following categories of cached items:
1) Application - this is stuff that applies to the whole system/application.
2) Tenant - this is stuff that is specific to a tenant. A tenant is a specific organization/company that is running software in your system.
3) Session - this is stuff that is specific to a session. A session is what a specific user of an organization creates and uses as they interact with your software.
The whole point of key design is to figure out how to develop unique keys. So let's start with the categories. We can do something simple like Application = "A", Tenant = "T", Session = "S". The category becomes the first part of the cache key.
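Here is one way that could look in code, as a minimal sketch; the exact class names, key patterns, and TTL values are assumptions, not the original sample.

using System;

public static class CacheKeys
{
    public static class Application
    {
        public const string KeyPattern = "A";

        public static class Currencies
        {
            // Extends the category pattern with its own signature; the
            // paging parameters make each page of results a unique key.
            private const string Pattern = KeyPattern + ".Currencies.{0}.{1}";

            public static TimeSpan TimeToLive
            {
                get { return TimeSpan.FromHours(12); }
            }

            public static string GetKey(int page, int pageSize)
            {
                return string.Format(Pattern, page, pageSize);
            }
        }
    }

    public static class Tenant
    {
        public const string KeyPattern = "T";
    }

    public static class Session
    {
        public const string KeyPattern = "S";
    }
}

Usage would then look like CacheKeys.Application.Currencies.GetKey(1, 50), which produces a key such as "A.Currencies.1.50".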
We can use nested static classes to define the parts of the key, starting with the categories. In the code sample above we start with an Application class that uses "A" as the KeyPattern. Next we build a nested class "Currencies" which extends the KeyPattern with its own unique signature. Notice that the signature in this case takes in parameters to create the unique key; here we use page and page size, so we can cache a specific page of results for a query that uses paging. There is also a TimeToLive property and a method to construct the final key from the pattern.
The above example caches things in a "local cache", not the distributed cache, because the information in this example is not specific to a user or session. It can be loaded on each machine, and each machine keeps its own copy. Generally you want to do this for anything that doesn't need to be distributed, because it performs much better (think local memory vs. serialization/deserialization/network, etc.).
When designing unique keys for things like session data, consider using the session identifier as an input to the key, since that guarantees uniqueness per session. Remember, you basically just have a really big name/value dictionary to fill up, but you have to manage the uniqueness of the keys yourself.
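For instance, the Session category in the hypothetical CacheKeys sketch above could bake the session id into the key (again, the names and the TTL are placeholder assumptions):

public static class Session
{
    public const string KeyPattern = "S";

    public static class Balances
    {
        private const string Pattern = KeyPattern + ".{0}.Balances";

        public static TimeSpan TimeToLive
        {
            get { return TimeSpan.FromSeconds(30); }
        }

        public static string GetKey(string sessionId)
        {
            // The session id guarantees uniqueness per session
            return string.Format(Pattern, sessionId);
        }
    }
}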
Takeaways
1) Use both a local and a distributed cache. Put session-specific or short-lived stuff in the distributed cache; cache everything else locally.
2) Set appropriate timeouts for items. This will vary depending on the type of information and how close to the source it needs to be.
3) Remove stuff from cache when you know it will be inconsistent (like updates, deletes, etc).
4) Take care to design cache keys that are unique. Build a model of the type of information you plan to cache and use that as a template for building keys.
Great to see more advanced applications like IDEs being moved to the cloud. This one currently only supports the Node.js/JavaScript/Python stack.
I'll be the first to sign up if they offer support for .Net. Possible business venture for someone else? Replace Visual Studio with an awesome browser-based equivalent; that's a powerful idea.
I’ve needed a clock that shows multiple time zones so I can schedule meetings with remote offices during times that overlap regular business hours. I couldn’t find anything on the market that did that, so I decided to build this product myself. This blog post shows how it was built.
Programming with .Net Gadgeteer
The software was written in C# for the .Net Micro Framework. It uses hardware that is compatible with the .Net Gadgeteer platform.
Schematic Diagram
This is the view from the designer in Visual Studio
Location Configuration
Each RFID card has an associated location stored on the Micro SD card. Here is an example of the configuration file stored on the card:
<configuration>
  <appSettings>
    <add key="LogLevel" value="Debug" />
    <add key="Wifi.Network" value="ssid-here" />
    <add key="Wifi.Password" value="network-password-here" />
    <add key="RFID.4D00559A66.Location" value="Portland, OR" />
    <add key="RFID.4D006CE088.Location" value="Georgia, GA" />
    <add key="RFID.4D005589A1.Location" value="Auckland, New Zealand" />
    <add key="RFID.4D0055D211.Location" value="Bangalore, India" />
    <add key="RFID.4D0055D01C.Location" value="Tel Aviv, Israel" />
    <add key="RFID.4D00558F43.Location" value="London, UK" />
  </appSettings>
</configuration>
You'll notice the pattern "RFID.card id.Location"; the "card id" is what is read when you place an RFID card over a reader. It is used to look up the corresponding location, like "Portland, OR", in the configuration file. The location is then used to get the current time and sun profile.
private TimeZoneInfo GetTimeZone(string cardId)
{
    string location;
    TimeZoneInfo timeZoneInfo;

    // Resolve the card id to a location, caching the result
    if (!locations.Contains(cardId))
    {
        var cacheKey = "RFID." + cardId + ".Location";
        location = configurationManager.GetSetting(cacheKey);
        locations.Put(cardId, location);
    }
    else
        location = (string)locations.Get(cardId);

    // Resolve the location to a time zone, caching the result
    if (!timeZones.Contains(cardId))
    {
        var geoPoint = geoLocationService.GetLocationGeoPoint(location);
        timeZoneInfo = geoTimeZoneService.GetTimeZoneInfo(geoPoint);
        timeZones.Put(cardId, timeZoneInfo);
    }
    else
        timeZoneInfo = (TimeZoneInfo)timeZones.Get(cardId);

    return timeZoneInfo;
}
The current time is displayed on the LED matrix modules. The sun profile is used to display “sunny hours” with blue dots.
Green dots are used to show "standard work hours" (8am to 5pm, Mon-Fri). This is helpful when arranging ad hoc meetings with various locations because it gives a quick indicator of when there will be overlap during standard business hours.
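To illustrate that logic only (the real display code lives in the clock's display classes and isn't shown here), a small helper along these lines could decide which dot to light for a given hour; the DotColor enum and the sunrise/sunset inputs are assumptions made for the sketch:

using System;

public enum DotColor { None, Green, Blue }

public static class HourIndicator
{
    // Green = standard work hours (8am to 5pm, Mon-Fri),
    // Blue  = daylight hours between sunrise and sunset.
    public static DotColor GetDotColor(DateTime localTime, TimeSpan sunrise, TimeSpan sunset)
    {
        var isWeekday = localTime.DayOfWeek != DayOfWeek.Saturday &&
                        localTime.DayOfWeek != DayOfWeek.Sunday;

        if (isWeekday && localTime.Hour >= 8 && localTime.Hour < 17)
            return DotColor.Green;

        if (localTime.TimeOfDay >= sunrise && localTime.TimeOfDay < sunset)
            return DotColor.Blue;

        return DotColor.None;
    }
}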
The main part of the program
private void ProgramStarted()
{
    configurationManager = new XmlConfigurationManager(sdCard);
    logger = new DebugLogger(configurationManager);
    networkManager = new WifiNetworkManager(wifi, configurationManager, logger);
    timeManager = new NativeTimeManager(configurationManager, logger);
    geoLocationService = new GoogleGeoLocationService(logger);
    geoTimeZoneService = new EarthToolsGeoTimeZoneService(logger);
    bitmapProvider = new DoubleNumberBitmapProvider();

    RFID1.DebugPrintEnabled = true;
    RFID2.DebugPrintEnabled = true;
    RFID3.DebugPrintEnabled = true;

    RFID1.CardIDReceived += (sender, id) =>
    {
        if (timeZoneId1 == id) return;
        timeZoneId1 = id;
        multipleTimeZoneDisplay.UpdateTimeZoneForRow(0, GetTimeZone(id));
    };

    RFID2.CardIDReceived += (sender, id) =>
    {
        if (timeZoneId2 == id) return;
        timeZoneId2 = id;
        multipleTimeZoneDisplay.UpdateTimeZoneForRow(1, GetTimeZone(id));
    };

    RFID3.CardIDReceived += (sender, id) =>
    {
        if (timeZoneId3 == id) return;
        timeZoneId3 = id;
        multipleTimeZoneDisplay.UpdateTimeZoneForRow(2, GetTimeZone(id));
    };

    networkManager.Connected += (sender, args) =>
    {
        timeManager.ApplySettings();
        timeManager.StartTimeService();
    };

    timeManager.TimeServiceStarted += OnTimeServiceStarted;
    timeManager.MinuteChanged += (sender, args) => multipleTimeZoneDisplay.WriteCurrentTime();

    networkManager.Connect();
}
As you can see in the code above, each RFID reader raises a "CardIDReceived" event when a card is placed on it, and the handler updates the display for the specific row that reader is assigned to.
There are several "managers" and services that abstract away the details: the geoTimeZoneService integrates with Earth Tools to get the current offset (daylight saving time) and sunrise/sunset hours; the geoLocationService integrates with Google to get the latitude and longitude for a given location; the timeManager synchronizes time with a time server; and finally, the networkManager (a WifiNetworkManager) establishes an internet connection over the local WiFi network.
Parts / Costs
The whole thing has about $500 worth of electronics and about $75 worth of wood. Here is the invoice for some of the key parts I bought from GHI Electronics:
- FEZ Spider Mainboard (1 @ $119.95) $119.95
- Gadgeteer Standoff Pack (3 @ $1.95) $5.85
- Extender Module (1 @ $4.95) $4.95
- 5x Breakout Module Set (1 @ $4.99) $4.99
- USB Client DP Module (1 @ $24.95) $24.95
- RFID Reader Module (3 @ $24.95) $74.85
- SD Card Module (1 @ $6.95) $6.95
- LED Matrix Module (DaisyLink) (6 @ $19.95) $119.70
- WiFi RS21 Module (1 @ $79.95) $79.95
From Amazon.com, you can find the RGB LED strips:
- 1m Addressable RGB LED Strip (about $32 x 3 = $96)
“Nothing tends to materialize man, and to deprive his work of the faintest trace of mind, more than extreme division of labour.”
GCM is Google's new cloud messaging system, which replaces C2DM.
Here is a bare-bones, quick tutorial on how to get notifications working.
Register Application with Google
First, go to Google's API Console and set up a new project for your application.
Enable “Google Cloud Messaging for Android” service.
Click the “API Access” link and grab the API Key for later use.
Write the Client Side Code
In your AndroidManifest.xml you'll need to set up your configuration like this:
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="fiserv.mobile.poc"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk android:minSdkVersion="10" android:targetSdkVersion="15" />

    <permission android:name="fiserv.mobile.poc.permission.C2D_MESSAGE" android:protectionLevel="signature" />
    <uses-permission android:name="fiserv.mobile.poc.permission.C2D_MESSAGE" />

    <!-- App receives GCM messages. -->
    <uses-permission android:name="com.google.android.c2dm.permission.RECEIVE" />
    <!-- GCM connects to Google Services. -->
    <uses-permission android:name="android.permission.INTERNET" />
    <!-- GCM requires a Google account. -->
    <uses-permission android:name="android.permission.GET_ACCOUNTS" />
    <!-- Keeps the processor from sleeping when a message is received. -->
    <uses-permission android:name="android.permission.WAKE_LOCK" />

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >

        <activity
            android:name=".MainActivity"
            android:label="@string/title_activity_main" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <receiver
            android:name="com.google.android.gcm.GCMBroadcastReceiver"
            android:permission="com.google.android.c2dm.permission.SEND" >
            <intent-filter>
                <action android:name="com.google.android.c2dm.intent.RECEIVE" />
                <action android:name="com.google.android.c2dm.intent.REGISTRATION" />
                <category android:name="fiserv.mobile.poc" />
            </intent-filter>
        </receiver>

        <service android:name=".GCMIntentService" />
    </application>

</manifest>
Now develop an Intent Service; this is a service that will handle all aspects of communication with GCM. Just extend GCMBaseIntentService. Here is what your Intent Service should look like:
package fiserv.mobile.poc;

import android.content.Context;
import android.content.Intent;
import android.util.Log;

import com.google.android.gcm.GCMBaseIntentService;

public class GCMIntentService extends GCMBaseIntentService {

    @Override
    protected void onError(Context arg0, String arg1) {
        Log.e("Registration", "Got an error!");
        Log.e("Registration", arg0.toString() + arg1.toString());
    }

    @Override
    protected void onMessage(Context arg0, Intent arg1) {
        Log.i("Registration", "Got a message!");
        Log.i("Registration", arg0.toString() + " " + arg1.toString());
        // Note: this is where you would handle the message and do something in your app.
    }

    @Override
    protected void onRegistered(Context arg0, String arg1) {
        Log.i("Registration", "Just registered!");
        Log.i("Registration", arg0.toString() + arg1.toString());
        // This is where you need to call your server to record the device token and registration id.
    }

    @Override
    protected void onUnregistered(Context arg0, String arg1) {
    }
}
Finally, in your Main Activity's onCreate method, call a method that looks like this:
private void RegisterWithGCM() {
    GCMRegistrar.checkDevice(this);
    GCMRegistrar.checkManifest(this);

    final String regId = GCMRegistrar.getRegistrationId(this);
    if (regId.equals("")) {
        GCMRegistrar.register(this, SENDER_ID); // Note: get the sender id from configuration.
    } else {
        Log.v("Registration", "Already registered, regId: " + regId);
    }
}
Notice the SENDER_ID needs to come from configuration. You can get this value from the Google API Console. Just grab it off the end of your project URL.
Now fire up your app; it should register with GCM and print out your registration ID. Copy that ID so you can send a test notification to your device.
You'll also want to record this registration id on your server, so you can start sending notifications from the server.
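Here is a minimal sketch of what that server-side hand-off could look like, assuming an ASP.NET Web API endpoint; the DeviceRegistration model and the IDeviceRegistrationStore interface are hypothetical placeholders, not part of GCM:

using System.Net;
using System.Net.Http;
using System.Web.Http;

// Hypothetical persistence abstraction - swap in your own storage.
public interface IDeviceRegistrationStore
{
    void Save(string deviceToken, string registrationId);
}

public class DeviceRegistration
{
    public string DeviceToken { get; set; }
    public string RegistrationId { get; set; }
}

public class RegistrationsController : ApiController
{
    private readonly IDeviceRegistrationStore store;

    public RegistrationsController(IDeviceRegistrationStore store)
    {
        this.store = store;
    }

    // POST api/registrations - called by the device after onRegistered fires
    public HttpResponseMessage Post(DeviceRegistration registration)
    {
        store.Save(registration.DeviceToken, registration.RegistrationId);
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}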
Send Notifications from Server
Here is a C# / .Net server example for sending notifications to GCM:
using (var wc = new WebClient())
{
    wc.Headers.Add("Authorization", "key=" + serviceKey);

    var nameValues = new NameValueCollection
    {
        {"registration_id", registrationId},
        {"collapse_key", Guid.NewGuid().ToString()},
        {"data.payload", HttpUtility.UrlEncode(message)}
    };

    var resp = wc.UploadValues("https://android.googleapis.com/gcm/send", nameValues);
    var respMessage = Encoding.Default.GetString(resp);
    loggingServiceClient.Debug("Got response from GCM: " + respMessage);
}
This is just a simple example to show exactly how the message is sent; I recommend using a framework like PushSharp when you implement this for real.
Notice that the "serviceKey" variable in this example should be the API key you got from the Google API Console earlier.
Once you send your message you should see log statements in your Android console or hit a breakpoint in your onMessage method of your Intent Service.
Note: make sure you are logged in with a Google account on your device, since that's a requirement for GCM to work.
“Daylight saving time was invented by George Vernon Hudson in New Zealand. Its intention is to save money by reducing the costs of keeping lights on at night. Today, its main purpose is to make software programmers’ lives hard.”
Quick reminder on how API versioning should work and what the numbers mean.
Consider a version format of X.Y.Z (Major.Minor.Patch):
- Bug fixes not affecting the API increment the patch version.
- Backwards compatible API additions/changes increment the minor version.
- Backwards incompatible API changes increment the major version.