Friday, November 15, 2013

Windows Development Box on Azure

Eager to try out Microsoft's new development platform? A quick way to get started is to create a virtual machine hosted on Azure.

There are many promotions available, such as BizSpark, that make it very affordable. The best news is that you don't pay for a virtual machine while it is turned off! (This is a big improvement over past virtualization providers.)

Windows Server 2012 R2 

Create a new Virtual Machine from an image of Windows Server 2012 R2. Why not one of the Visual Studio on Server 2012 gallery images? In short, Windows Store development is harder to get working. I could not get the simulator to run as it kept asking me to re-authenticate. (I still can't get the simulator to run under R2, but the app can run natively.)

Connect

Select your new VM in the list and you should see a Connect button at the bottom of the page. Click that to get an .rdp file populated with the address and port of your new machine. Import that .rdp file into your client if you desire to save your username and password.

Microsoft has improved the RDP clients for Apple operating systems. If you use OS X or iOS I encourage you to try the free apps available in the App Store on both platforms. (Using the new touch-optimized Start menu is actually pleasant on the iPad as it recognizes the swipe gestures.)

Install Desktop Experience Feature


  1. Add Roles and Features Wizard
  2. Features page
  3. User Interface and Infrastructure
  4. Desktop Experience
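
If you prefer the console to the wizard, the same feature can be installed with PowerShell (a sketch; `Install-WindowsFeature` and the `Desktop-Experience` feature name are the Server 2012 R2 spellings):

```powershell
# Install the Desktop Experience feature; -Restart reboots automatically if required
Install-WindowsFeature Desktop-Experience -Restart
```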

Disable IE Enhanced Security


  1. Server Manager
  2. Local Server
  3. IE Enhanced Security Configuration: Off (both Admin & Users)

Download & Install Visual Studio 2013

If you try to download VS before disabling IE enhanced security you will have a lot of grief. Naturally, once you get VS installed you are free to turn it back on, though it is probably not necessary.

Enjoy. Let me know if you run into any problems with these instructions so I can improve them.

Friday, November 2, 2012

Monkey Space 2012

SUMMARY: Monkey Space is the annual conference on cross-platform and open source .NET development. At $300, it was money well spent. Mono/.NET code is running on all major devices and represents a compelling platform for flexible, performant, future-proof development. Things to keep an eye on: Xamarin for mobile development, ServiceStack for web services, Type Providers for type-safe access to just about anything.

Below are my notes from the talks I attended...


Keynote by Miguel de Icaza

Miguel is the CTO of Xamarin, and the director of the Mono Project. A few points from his talk follow:

  • The casual gaming platform for Sony and Nintendo mobile devices is Mono-based.
  • Unity also leverages Mono
  • Mono 3 is out and includes a fully-featured C# 5 compiler
  • Half of Xamarin's sales are iOS, half are Android
  • Forecast that in five years everyone will have a mobile device

Direction

  • no longer porting Microsoft APIs to other platforms
  • instead, exposing every native feature on every platform
  • ServiceStack is becoming industry darling for web services (Mono first)
  • aggressive optimizations
    • LLVM optimizer
    • will hardware accelerate some data types like matrix (OpenTK, XNA)
  • static analysis tools
  • working on profiling tools

ServiceStack by Demis Bellot

I heard that this was Demis' first talk on ServiceStack. If so, the only giveaway was that he had enough material to fill a full day. It was my favorite session of the conference.

The slides are available on slideshare: http://www.slideshare.net/newmovie/what-istheservicestack-14819151

Demis described his past experience at the BBC, and how their zeal to do everything the "right way" led to a slow, unwieldy architecture. He used this to motivate ServiceStack's design choices, such as a fast JSON parser, few dependencies, and POCO DTOs (Plain Old CLR Object Data Transfer Objects).

What we did right at the BBC

  • pub/sub
  • message queues
  • DTOs

What is a Service

  • reusable capabilities available remotely
  • many clients
  • accessible, interoperable

Some Interesting Features

  • Supports Google protocol buffers
  • IAuthProvider for authentication
  • Develop on Windows but deploy to Linux so can scale at $0 license cost.
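
To give a flavor of the message-based style, here is a minimal ServiceStack service sketch (from memory of the 2012-era API; the route, DTOs, and class names are illustrative, not from the talk):

```csharp
using ServiceStack;  // 2012-era code pulled this from ServiceStack.ServiceInterface

[Route("/hello/{Name}")]
public class Hello
{
    public string Name { get; set; }
}

public class HelloResponse
{
    public string Result { get; set; }
}

public class HelloService : Service
{
    // Any() handles every HTTP verb; request and response are plain POCO DTOs
    public object Any(Hello request) =>
        new HelloResponse { Result = "Hello, " + request.Name };
}
```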

F# 3.0 by Don Syme

Don Syme presented some of the latest features of F# 3.0. Of particular interest were Type Providers. Type Providers are a tool for generating rich static types for data and functions that are normally accessed dynamically. He demonstrated accessing data from Freebase.

Dr. Syme is clearly an advocate of functional and multi-paradigm programming. My favorite quote from the talk was: "The dark days of object fundamentalism are past us" ... "we got lambdas in C++".


Moving to Mobile by Somya Jain

Somya described the pitfalls he ran into when transitioning to mobile development.

  • Memory
    • constrained. No swap.
    • varies by device
    • low memory warning
  • Memory bugs
    • Leaks
    • Overrelease
    • Long lived references not GC'd
    • How to detect?
      • Instruments (iOS)
        • Use zombies to detect objects used after being freed
      • Android Memory Profiling
  • Bitmaps
    • Decoding, scaling, cropping happen on main thread
    • do some things on background thread
    • use in memory caching tools
      • NSCache (iOS) LRUCache (android)
  • Network
    • Slow, intermittent
    • 3G v WiFi
    • Use multiple network connections (e.g. download several files in parallel, say 4 at a time)
    • use caching, offline mode
    • consider pre-loading data when you know the user's next steps
  • CPU
    • do performance-intensive work on a background thread
    • consider offloading work to the GPU

Tip: Always test performance on a real device (ideally the worst one you must support)
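
NSCache and LruCache are platform classes, but the caching idea behind them is simple enough to sketch in C# (my own minimal LRU cache, purely illustrative):

```csharp
using System;
using System.Collections.Generic;

// Minimal LRU cache illustrating the in-memory caching idea behind
// NSCache (iOS) and LruCache (Android): capacity-bounded, with the
// least recently used entry evicted first.
class LruCache<TKey, TValue>
{
    private readonly int capacity;
    private readonly Dictionary<TKey, LinkedListNode<(TKey Key, TValue Value)>> map = new();
    private readonly LinkedList<(TKey Key, TValue Value)> order = new();

    public LruCache(int capacity) => this.capacity = capacity;

    public bool TryGet(TKey key, out TValue value)
    {
        if (map.TryGetValue(key, out var node))
        {
            order.Remove(node);   // move to front = most recently used
            order.AddFirst(node);
            value = node.Value.Value;
            return true;
        }
        value = default!;
        return false;
    }

    public void Put(TKey key, TValue value)
    {
        if (map.TryGetValue(key, out var existing))
        {
            order.Remove(existing);
            map.Remove(key);
        }
        else if (map.Count >= capacity)
        {
            var last = order.Last!;   // evict the least recently used entry
            order.RemoveLast();
            map.Remove(last.Value.Key);
        }
        var node = new LinkedListNode<(TKey, TValue)>((key, value));
        order.AddFirst(node);
        map[key] = node;
    }
}
```

The real platform caches also respond to memory-pressure callbacks (didReceiveMemoryWarning, onTrimMemory); the fixed capacity stands in for that here.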


Micro ORMs

Simple, lightweight, but no automatic relationships

Can use multiple on same project to leverage their strengths

http://johnnycode.com/MassivelyDapperSimpleData/

Dapper

https://github.com/SamSaffron/dapper-dot-net

  • Use POCOs or Dynamic
  • Intellisense with POCOs
  • caches the reflection-based mapping
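
Both styles in a minimal sketch (the connection string and Dogs table are hypothetical; `Query<T>` and the dynamic `Query` overload are Dapper's extension methods):

```csharp
using System.Data.SqlClient;  // any IDbConnection implementation works
using Dapper;                 // brings the Query/Execute extension methods

public class Dog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class Example
{
    public static void Run(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            // Strongly typed: columns map to Dog properties (with Intellisense),
            // and the reflection mapping is cached after first use
            var dogs = conn.Query<Dog>(
                "select Id, Name from Dogs where Name = @Name",
                new { Name = "Rex" });

            // Or dynamic, when a POCO is not worth defining
            var rows = conn.Query("select * from Dogs");
        }
    }
}
```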

Massive

  • optimized for writing
  • uses ExpandoObject for reading
  • serializing an ExpandoObject to JSON produces a dictionary; not quite what you might expect
  • ActiveRecord-like hooks like AfterUpdate
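
The dictionary behavior is easy to see in plain C#: an ExpandoObject is really an IDictionary<string, object>, which is what a serializer sees when it walks the object (a minimal illustration, not Massive itself):

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

dynamic row = new ExpandoObject();
row.Name = "Rex";
row.Age = 3;

// Under the hood an ExpandoObject is an IDictionary<string, object>,
// so a serializer may emit key/value pairs rather than a plain object
var dict = (IDictionary<string, object>)row;
Console.WriteLine(dict["Name"]);  // Rex
Console.WriteLine(dict.Count);    // 2
```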

Simple.Data

  • Syntax is easy to grok
  • Mocking support out of the box
    • using SimpleData.Mocking ... MockHelper.UseMockAdapter(new InMemoryAdapter())
  • Can get back a dynamic or strong-typed object
  • ActiveRecord-like FindBy

MonoGame

See the Project Linker tool for synchronizing projects and content compilation


Graffiti

Graffiti is "a cross-platform high-performance rendering engine" optimized for mobile hardware.

  • data-driven
  • retained mode
  • time-driven animations, independent of frame rate
  • takes advantage of C# syntax
  • uses XNA Reach profile for low power hardware

Monday, January 2, 2012

The Server / Clients Architecture

Modern software applications should be divided into client and server components. This is not a new concept. What's new is that you need more than one client.

The ubiquity of heterogeneous personal computers pluralizes the client.  I expect to be able to access Pandora from my work computer, home notebook, phone, tablet, video game console, car and blu-ray player.

As users demand this Cloud Experience, the implementation demands a Server / Clients Architecture.

The responsibility for storing the user's data belongs to the server.

The responsibility for providing a device-appropriate interface to the data belongs to each client.

Many application developers will resist.  On the surface it seems like more work for questionable gain.  But there is no stopping this trend.  And I believe it is in our best interest to embrace it.

First, we need not fear having to re-write each client from scratch.  We will be able to share code between them.  Identifying patterns is a hallmark of good architecture.

Second, the more clients you have, the more valuable your server becomes.  This plurality of clients opens doors to new opportunities: technical innovations, pricing models, market segments, service offerings, competitive advantage, ...

So, application architects, I encourage you to ask yourself how you can apply the Server / Clients Architecture pattern.  This is only the beginning.  We may enable our users to experience a synergy between their devices as yet unrealized.

Saturday, November 26, 2011

The Cloud Experience

We are entering the age of ubiquitous computing, and it is not a computer on every desktop.  It is a myriad of devices.  And the most common is more likely to be in your pocket than on your desk.  And we expect these devices to serve us fluidly.

What emerges is The Cloud Experience.  It is the idea that I can work effectively regardless of time, place or device.

We often think of this experience with email, or social networking like GMail and Facebook.  But, we will come to expect this experience of all of our data and software.

I expect that as I write this blog post, I can close my laptop. Then, go to the park, pull out my phone, and pick up editing where I left off.  Then, go to a friend's house, log on with his computer and publish the post.

There are two basic requirements:

  • The state of my work is saved "in the cloud"
  • An appropriate user interface is available for the device I am using

Next generation software developers should embrace this experience.

Saturday, August 13, 2011

For Speed and Certainty TDD Wins


As software grows, Test Driven Development beats Legacy Development in terms of speed and certainty.

With Legacy Development (manual testing)

1. I get some new requirements. 
2. I modify the application code. 
3. I step through the code in my head.
4. I run the application in a few scenarios.
Repeat steps 2 - 3 or 4 until everything works as expected
Repeat steps 1 - 4 until all requirements are met

With Testing Legacy Code (write tests after application code)

1. I get some new requirements. 
2. I modify the application code. 
3. I modify the test code.
4. I run the tests.
Repeat steps 2 - 4 until all tests pass
Repeat steps 1 - 4 until all requirements are met

With Test Driven Development (write test(s) before application code)

1. I get a new requirement.
2. I encode the requirement as a test.
3. I modify the application code.
4. I run the tests.
Repeat steps 2 - 4 until all tests pass
Repeat steps 1 - 4 until all requirements are encoded
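
For example, suppose the new requirement is "orders over $100 get a 10% discount". Step 2 encodes it as a test before any implementation exists; step 3 writes just enough code to pass. (The names below are mine, and plain exceptions stand in for a test framework to keep the sketch self-contained.)

```csharp
using System;

static class Pricing
{
    // Step 3: just enough implementation to satisfy the requirement
    public static decimal Total(decimal subtotal) =>
        subtotal > 100m ? subtotal * 0.9m : subtotal;
}

static class PricingTests
{
    // Step 2: the requirement, encoded as a test
    public static void OrdersOverOneHundredGetTenPercentOff()
    {
        if (Pricing.Total(200m) != 180m)
            throw new Exception("discount not applied");
        if (Pricing.Total(50m) != 50m)
            throw new Exception("small orders should be unchanged");
    }
}
```

In a real project the test class would live in a separate test project under a framework like NUnit or MSTest; the shape of the loop is the same.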

Compare the three approaches in terms of speed and certainty.

Speed

All three approaches get slower over time.  The software grows and verifying all requirements takes more time.  

Doing this manually is clearly slowest. (Computers run code faster than I do.) But writing tests takes time too.  The question is which takes more time? Writing tests? Or manual testing?

The answer is that the cost of writing tests stays roughly constant.  However, the cost of manual testing goes up with the amount of code.

The question becomes: when does the time taken to manually verify code exceed the time necessary to write unit tests which can automatically verify the code?  After a week?  A month?  After the second developer comes on?  For applications of significant size, TDD wins.  

Certainty

The most uncertainty is with manual testing and the least is with test driven development.  

In manual testing (legacy development), the programmer elects to run some test scenarios mentally, and some manually. It is based on judgment calls, and is nebulous.

With unit testing of existing code, the parity between tests and implementation is difficult to verify. It is mentally taxing to determine which tests to write after the implementation is done. Is all code covered?  Even with 100% code coverage, not all requirements may be covered.

With test driven development there is less ambiguity.  A requirement is encoded as a test. Then, just enough implementation code is written to meet the requirement.  Parity.  Balance.  Certainty.  

Saturday, July 23, 2011

AppHarbor vs. Heroku

I am a simple caveman web app developer.  Your scalable racks frighten and confuse me.  But, there is one thing I do know.  I have code.  I have data.  And I must put that code and data online as quickly as possible.

Both AppHarbor and Heroku provide deployment using Git.  This is elegant.  I expect to see more services offering both continuous integration and live deployment via source control.

AppHarbor is for ASP.NET code.  Heroku is for Ruby on Rails.

For now, Heroku and Rails lead the way.  But, AppHarbor and .NET are catching up quickly.

The code.
Round 1.
Fight!

AppHarbor provides some continuous integration features.  It runs your unit tests prior to deployment.  If a test fails, the software does not get deployed.  It also provides a list of recent builds, which you may click to deploy.  Heroku does not provide these features.

What about debugging your live code?  Heroku wins this one.  Heroku automatically logs exceptions.  It provides a couple different logging options.  AppHarbor provides a page for "Errors", but throw as I might, I've never seen it say anything but "No errors to display."  They currently recommend you roll your own error logging.

Heroku also recently rolled out a feature to help with managing Staging and Production environments.  This is a highly requested feature on AppHarbor, so I expect we will see it soon there.

Finally, what about writing code?  This goes to AppHarbor (well, actually to Microsoft).  Visual Studio simply provides more assistance to the developer.  With Ruby on Rails you will be installing more tools and spending more time at the console.  And code completion?  Visual Studio beats the Ruby development environments.  (Can a dynamically typed language ever offer as much code completion as a statically typed one?)

AppHarbor wins (but not without taking a few licks).

The data.
Round 2.
Fight!

One of the cornerstones of Rails is "Convention over Configuration", and this is most evident with its ActiveRecord based ORM.

Using .NET on the other hand...  Well...  There are many options for data access.  And they certainly use conventions.  But, none is without some configuration.  And AppHarbor offers no clear recommendation.  This is a brick wall to beginner adopters.

And schema migrations?  Rails wins again.  Heroku handles migrations the same way they are handled locally: by calling a rake command.  The code and schema stay naturally in sync as the migrations are stored in text files alongside the source.
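
Concretely, the Heroku workflow looks like this (commands sketched from memory of the 2011-era Cedar toolbelt):

```shell
git push heroku master       # deploy the code
heroku run rake db:migrate   # apply pending migrations to the remote database
rake db:migrate              # the very same command you run locally
```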

AppHarbor is very different.  Databases are created manually via the web interface.  Databases cannot be added or dropped at runtime from the application.

Furthermore, Microsoft's Entity Framework is not there yet either.  Using the EF Code First approach (which seems to be the path of least resistance) the only "migration" supported is dropping the database, and creating an entirely new one.  Clearly this nuclear option won't work with AppHarbor's policy, or if you want to keep your data.  (There is a work-around that only drops the tables.  I ended up creating my tables manually with a sql script exported from my local database.)

Another database bootstrapping problem that new MVC3 projects will run into is the absence of Application Services tables.  These services can be removed from a new MVC project, or you can populate the remote database as described in this thread.

Heroku wins.  Flawless victory.

Round 3?
The conclusion is that Heroku is faster overall, particularly for database deployment.  But, AppHarbor, Microsoft, and open-source .NET developers are closing the gap.

Saturday, July 16, 2011

Git on Google Code vs GitHub

Google Code now supports Git as a repository option.  Here are my first impressions of pushing a git repository to Google Code compared to GitHub:

Cons

  • Project name has to be globally unique and all lowercase.
    On GitHub my projects are under my user directory.  So, I don't have to compete for a unique project name.  And I can use mixed case, which is normal for most languages.
  • No automatic readme.  GitHub has a convention where a readme file is automatically shown on your project home page.  And it is prettified if it uses markdown syntax.
  • No instructions for pushing your existing repo.

    After you create your project you are not presented with instructions for pushing your code via git.  In fact, you aren't presented with many instructions at all.  If you navigate to the source page you will see instructions for cloning your new repo, but none for a pre-existing repo.

    As a side note, you can push an existing repo to your Google Code project.  Follow similar steps as you would on GitHub:
    • Go to the source tab to find your source URL and a link to a generated password
    • cd <existing git repo>
    • git remote add origin <url>
    • Paste your password when prompted
    • git push -u origin master


Pros

  • Pick your license.  There doesn't seem to be a conventional way to specify your open source license on GitHub.
  • Even though Google Code doesn't have a convention for using your readme file, you can use wiki syntax to update your project description.  It does not track changes.
  • There is better integration between the project wiki and the project home page.
  • Integration with other Google services like Analytics and +1
  • You can pick a project logo...
For the time being, I'll be sticking with GitHub.  But, I'm glad to see some competition.