TechEd 2013 Overview – Wednesday

April 21, 2013

On Wednesday, the real meat of the conference started. There were competitions and promos, nerds everywhere, and stacks and stacks of sessions.

Impressions

The Durban Convention Centre was swarming with nerds when we got there on Wednesday morning. Any time a slightly interesting piece of swag appeared, the distributor was mobbed. The only decent coffee was available at two stands – one belonged to a coffee company (Lavazza) and the other was the Gijima stand. Both were doing a roaring trade, with people queuing 20 deep (literally!) for coffee. I think a few more coffee stands wouldn’t have gone amiss.

There were all sorts of competitions running. One company (Flowgear) had a speaking area where they were promoting their product, and after every one-hour demo session they drew a seat number and gave someone an iPad mini. Masterful marketing (lifted from the timeshare industry) – they probably got their message to more ears than the rest of TechEd combined.

By the end of TechEd, the Lavazza coffee makers had served over 4,000 cups of coffee. I hope they don’t have any repetitive strain injuries from making that much coffee!

Session 1 – Becoming Agile and Lean (Martin Cronjé)

This was a practical session where Martin shared some aspects of his methodology for lean / agile development, and stacks of examples of task boards and what they mean to the organisation that is running them. He had a few examples of agile gone bad too. Martin was an excellent speaker, and the examples he gave were engaging and interesting.

I was impressed by his pragmatic attitude and his contention that agile should be implemented in whatever way works for the organisation, rather than straight from a textbook with no consideration of the factors that matter to the organisation implementing it.

I don’t think the industry is mature enough to use Agile throughout (especially in business-to-business engagements where one company is developing one half of the system and another company the other half), but it’s interesting to keep an eye on, and I think the advantages of using Agile where it makes sense are real.

Session 2 – Introduction to open and flexible PaaS with Windows Azure (Beat Schwengler)

Beat presented some of the same strategic content that we had covered in the Cloud Strategy day on Tuesday, and then proceeded to give some demos of what Windows Azure can do for you. He demonstrated deploying cloud apps directly from source control (works for TFS and GitHub), and spent a bit of time demonstrating Microsoft’s Hadoop offering on Azure, called HDInsight. I was very interested in that last one; it definitely warrants some more attention.

He then demonstrated Windows Azure Mobile Services, which are a powerful mechanism for creating rich services for use with mobile apps (Windows Phone, Android, iOS or HTML5), with integrated push notifications for Windows Store apps (though, it seems, not for Android or iOS apps).

The awesome thing about these mobile services is that you can run 10 of them for free on Azure (with the proviso that you don’t get a guaranteed uptime SLA until you start paying and deploying more nodes).
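To make that concrete, here is a minimal sketch of what calling a Mobile Service from C# looks like, assuming the Mobile Services client SDK (the WindowsAzure.MobileServices NuGet package). The service URL, application key and TodoItem table are placeholders of my own, not a real service.

using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class TodoItem
{
    public int Id { get; set; }
    public string Text { get; set; }
}

public static class MobileServiceDemo
{
    private static readonly MobileServiceClient Client = new MobileServiceClient(
        "https://myapp.azure-mobile.net/", "YOUR-APPLICATION-KEY");

    public static async Task AddItemAsync(string text)
    {
        // Insert a row into the table backing the service. The service can
        // attach server-side scripts to this operation, e.g. to fire a push
        // notification to Windows Store clients.
        await Client.GetTable<TodoItem>().InsertAsync(new TodoItem { Text = text });
    }
}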

Session 3 – ASP.NET MVC: Tips for improved maintainability (Peter Wilmot)

Peter seems to be in the same boat as me: he’s a back-end developer forced to participate in a world where most of the work is increasingly moving to the client. He put together this session about how to write maintainable MVC code. One highlight of the presentation was a brilliant slide in which, coming at MVC from a code-centric perspective, he depicted which parts of MVC development need which skills, and where you should expect a lot of change versus where you want fewer changes.

He also had a few practical rules for how to structure your projects and which features of MVC to use to make things easier (use data annotations on your view models, etc.).
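As an illustration of that data annotations tip (my own minimal example, not one of Peter’s): the validation rules live on the view model itself, and MVC enforces them on both the client and the server.

using System.ComponentModel.DataAnnotations;

public class RegisterViewModel
{
    [Required, StringLength(50)]
    [Display(Name = "User name")]
    public string UserName { get; set; }

    [Required, EmailAddress]
    public string Email { get; set; }

    [Required, DataType(DataType.Password)]
    public string Password { get; set; }
}

In the controller, a single ModelState.IsValid check then covers all of these rules, so the validation logic isn’t scattered through the actions.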

This was one of the most valuable presentations that I attended in the Dev track, it was full of practical advice presented in an accessible manner. I will certainly be covering this content with the developers at my employer.

Session 4 – What did your last presentation die of? (Johan Klut, Blessing Sibanyoni, Jaco Benade, Robbi Laurenson, Rupert Nicolay)

Each participant told a story of a bad mistake they had made in a presentation, explained how they got there, and described what they should have done (and have done ever since) to prevent that particular problem. It was very engaging.

The presenters covered a storytelling technique called the CAST process, as explained at http://storiesthatmovemountains.com/

Session 5 – Panel Discussion: Modern developer practices – the theory and the reality

This session was a group discussion among a few speakers about how development should be done, how developers must take personal responsibility, and so on. I didn’t really take anything away from it.

The Flowgear Challenge

At the end of Wednesday, while sitting down with my colleagues for a quick chat before we went for supper, I saw a tweet from Flowgear saying they would give an iPad mini to the person who wrote the best implementation of a session reminder system for TechEd sessions containing a particular keyword. That sounded like something right up my alley, so I decided to write a Windows Azure implementation of the challenge. I spent the evening getting to grips with Azure and writing a web site where someone could request reminders, plus a worker role that inspected the data the web site wrote and periodically sent the requested reminders. I’ll post the details of my implementation, as well as some things that I learnt about Azure in the process, in a follow-up post.
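In the meantime, the shape of the worker role was roughly this (a sketch only – ReminderStore and Notifier are illustrative stand-ins, not the actual code):

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class ReminderWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            // ReminderStore and Notifier are hypothetical stand-ins for the
            // storage and sending parts of the real implementation.
            foreach (var reminder in ReminderStore.GetDueReminders(DateTime.UtcNow))
            {
                Notifier.Send(reminder);          // e.g. one email per reminder
                ReminderStore.MarkSent(reminder); // so it isn't sent twice
            }

            Thread.Sleep(TimeSpan.FromMinutes(1)); // poll interval
        }
    }
}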

The above took me until 1 AM on Thursday morning. I’ll tell you more about Thursday in the next blog post.

TechEd 2013 Overview – Tuesday

April 21, 2013

My employer sent me to TechEd Africa in Durban, and I had a whale of a time. I thought it would be fun to chronicle the whole experience, with a few spin-off blog posts about topics I found particularly exciting or rewarding (more on that later!)

Let’s go through day by day, starting with Tuesday.

Cloud Strategy Day

We got invited to the Microsoft Cloud Strategy day, hosted by Beat Schwengler. He is a director in Microsoft’s cloud strategy group, and other than having the coolest Twitter handle ever (@cloudbeatsch), he had some very interesting content to share about how Microsoft views the cloud in its Azure product, and what they are doing to drive adoption.

Some key takeaways from his session (for me) were:
MS is embracing Open Source technologies in the cloud.
Ubuntu and CentOS are both first-class citizens of Azure (in terms of the IaaS offering), and Azure has a very interesting Hadoop offering – more on that later.

Computing costs are coming way down.
You can now host 10 web sites and 10 app APIs for free on Azure, provided that you are happy working without an uptime SLA until you start paying for redundancy. For hackers putting apps together, that is a major benefit. You can try ideas for apps and services, and if they don’t work, you just move on without having expended anything other than your time.

Business models and monetisation are critically important.
Beat stressed the importance of being clever about how you monetise your application, and he had some interesting stats on the income generated through advertising versus the income generated through subscription services. He also had every table in the conference room put together a business plan, which I found to be a worthwhile exercise.

Opening Keynote

After dashing through check-in at our hotel, our little group of colleagues went to register for the conference at the convention centre in Durban and attended the opening keynote.

They had two dancers performing a very gymnastic routine, which was interesting to watch if a bit out of tune with the rest of the proceedings.

The keynote consisted mainly of the MS guys working through a modern computing scenario: a business owner requesting an app feature, a developer adding that feature, an IT pro deploying it, and finally the business user reporting on the expenditure in Dynamics AX.

At the time I thought the scenarios were a bit stilted, but on reflection afterwards it wasn’t so bad: they covered a lot of ground and opened a few avenues of interest for me when deciding which sessions to attend.

We opted to skip the opening party and went to have a pizza at a nice Italian pizza joint we had found at a previous TechEd.

Stay tuned for Wednesday’s post…

Visual Studio 2008 and 2010 networking differences for TFS

January 26, 2012

Someone at work inherited my old development machine and was setting up Visual Studio 2008 and 2010 under his own login.

He could add our TFS server from the VS2010 instance, but not from Visual Studio 2008 – VS gave an error saying that the Team Foundation Server does not exist.

I stepped in to help get it configured, and as a debugging step asked him to run Wireshark and catch the network traffic.

As soon as we saw that the TFS server’s IP address was nowhere in the trace, I knew what the issue was: VS2008 uses the system proxy settings and asks the proxy server to connect to TFS. This fails on our network because, for security reasons, our proxy has no visibility of our internal network.

As soon as we put in a proxy bypass for local addresses, we could add the TFS Server and all was well.
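The bypass can also be scoped to a single application through .NET’s standard proxy configuration in a .config file – a sketch, with an illustrative internal address pattern:

<configuration>
  <system.net>
    <defaultProxy>
      <!-- bypassonlocal lets requests to local addresses skip the proxy -->
      <proxy usesystemdefault="true" bypassonlocal="true" />
      <bypasslist>
        <!-- optionally bypass an internal address range, e.g. 10.x.x.x -->
        <add address="10\.\d+\.\d+\.\d+" />
      </bypasslist>
    </defaultProxy>
  </system.net>
</configuration>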

I would imagine that VS2010 either has its own local-address check and bypasses the proxy, or bypasses the proxy on failure. Either way, it wasn’t bothered by the incorrect proxy configuration.

Maximum length of .NET command line params

September 1, 2011

Ever wondered what the maximum length of a command line param to a .NET application is?

I am investigating simple integration options between processes and wanted to know the limitation of this method of passing info from one process to another.

I found some documentation on MSDN (http://msdn.microsoft.com/en-us/library/ms682425(VS.85).aspx) which states that the maximum command line can be 32,768 characters.

So then I put together this .NET app to test that:

[Screenshot: the resulting two applications running]

You can specify the length of data that you want passed to the other process.

The code that launches the other process looks like this:
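(What follows is a console-style sketch of that code – the executable name is my own illustration, and the real test app let you choose the length interactively.)

using System;
using System.Diagnostics;

class Launcher
{
    static void Main(string[] args)
    {
        int length = int.Parse(args[0]);          // desired argument length
        string payload = new string('x', length); // e.g. 32,768 x's

        var startInfo = new ProcessStartInfo
        {
            FileName = "Receiver.exe", // hypothetical name for the second app
            Arguments = payload,
            UseShellExecute = false    // use CreateProcess directly
        };
        Process.Start(startInfo);
    }
}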

And then I modified the other application’s Program.cs file as follows:
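(Again a sketch rather than the original snippet: the receiver simply reports how many characters actually arrived.)

using System;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Received {0} characters.",
            args.Length > 0 ? args[0].Length : 0);
        Console.ReadLine(); // keep the window open to read the result
    }
}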


The result of this test?

I could pass 32,763 characters to the other process, which is 5 fewer characters than the API call allows (4 if you subtract the null terminator on the string).

Does anyone know where the additional 4 characters have gone?

WCF configuration editor not showing in VS2010 Workflow Service Project

June 14, 2011

I am creating a Workflow service project for WF 4.0 in VS2010.

When I right-click on my web.config, I am not seeing the “Edit WCF Configuration” option:

[Screenshot: the context menu, with the WCF config editor option not visible]


I realised that a workaround for this issue is to launch the editor from the Tools menu (Tools > WCF Service Configuration Editor):

This seems to wake VS up, and once that has been done the option will show:

[Screenshot: the context menu, with the WCF config editor option now available]

MySQL – A first look

June 21, 2009

Why:

I recently needed to produce a system for a client that owns a trading company. Now this is a fairly small business, but the data requirements were such that the database would easily grow above 4GB within a year or two. So SQL Express was out, as it is limited to 4GB, and the company is too small to afford the high licence fees of a full SQL Server.

The tools:

So I spotted the opportunity to learn a bit, and downloaded and installed MySQL 5.1. First impressions were really positive: it installed without hassle, and I found a tool called MySQL Workbench on the same site, which looked like a good management tool. I installed that too, and the program seemed OK to work with – I could design my tables and so on. But when I wanted to load those tables into a database, I ran into some annoying limitations in the free version of MySQL Workbench. Basically, it is very clunky unless you buy the full version, which I was not going to do for one little project.

Then I looked around and found Toad for MySQL. I was astounded – what an amazing product! It is completely free, and it delivers an excellent user experience. You can write and run queries (with code completion!), design tables in a really friendly table editor which allows you to commit entities to the database or to get the DDL to create them yourself, create table diagrams, and more. I was suitably impressed.

So what is it like?

Initial impressions were quite favourable. There are a few concepts that you have to learn while designing tables, like the storage engines: MySQL allows you to specify a storage engine per table. The InnoDB engine is transactional and allows foreign key relationships. The MyISAM engine is non-transactional and does not allow foreign keys, but it is super quick – useful for logging or, in my case, for storing huge numbers of stock price records. There is also the in-memory engine, which works like a SQL Server temporary table, except that an in-memory table’s definition survives a reboot (without any data content), while a temp table is gone after a restart.
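Here is a sketch of what choosing an engine per table looks like from .NET, assuming MySQL Connector/Net (the MySql.Data package); the connection string and tables are illustrative:

using MySql.Data.MySqlClient;

class CreateTables
{
    static void Main()
    {
        using (var conn = new MySqlConnection(
            "server=localhost;database=trading;uid=app;pwd=secret"))
        {
            conn.Open();

            // Transactional data with foreign keys: InnoDB
            new MySqlCommand(
                "CREATE TABLE trades (" +
                "  id INT AUTO_INCREMENT PRIMARY KEY," +
                "  account_id INT NOT NULL" +
                ") ENGINE=InnoDB", conn).ExecuteNonQuery();

            // Bulk price history, no transactions needed: MyISAM for speed
            new MySqlCommand(
                "CREATE TABLE price_history (" +
                "  symbol VARCHAR(10) NOT NULL," +
                "  traded_at DATETIME NOT NULL," +
                "  price DECIMAL(18,6) NOT NULL" +
                ") ENGINE=MyISAM", conn).ExecuteNonQuery();
        }
    }
}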

Its query language is full-featured, and a little foray of mine into date processing left me with a very favourable impression – there is lots of support online, and the language is as powerful as T-SQL, as far as dates are concerned at least. It was stable and fast, and I would be quite happy putting any of my systems into production using MySQL. Of course I am stating the obvious here – many huge systems (including Google) run on MySQL – but it is still quite nice to have a look for yourself.

And what didn’t I like:

It does not seem to have Windows Authentication support, which is a bit annoying. The Entity Framework did not work with an out-of-the-box client install, although I did read that it works with some additional installation. MySQL does now support stored procedures, but still not all the richness we have in SQL Server, like DDL triggers, user-defined functions and so on.

And the conclusion?

For systems that are not hugely database-bound, but rather have an intelligent middle tier and data layer, I would be very happy to use MySQL. I believe it would be as productive as using SQL Server, and it doesn’t cost a cent! What a pleasure.

What can we learn from design patterns?

April 8, 2009

I saw this article, in which the author says (and I’m paraphrasing here) that design patterns stifle creativity and should not be used.

Which led me to the question: “In addition to learning design patterns by rote, and implementing them from memory in some twisted version of cut and paste, what can we learn from design patterns?”

Although I believe that there are many problems that should not be solved by forcing the problem domain into a pre-rolled solution, I also believe that design patterns do more good than harm.

I have always asked “What can I learn from this design pattern?” rather than “How can I force this piece of code to conform to this design pattern?” I personally find that the way of thinking I learnt from studying design patterns has helped me enormously to write code that follows the accepted best practices of object orientation – long before I even knew that such things existed. The practices I am referring to are things like the open/closed principle, the Law of Demeter, the Liskov substitution principle, and the single responsibility principle.

My contention is that by studying design patterns, you learn to solve problems in ways that automatically conform to these and other best practices, and as such will be easier to create, maintain, unit test etc.

And, for many common programming problems, design patterns already represent an industry-standard method of solving them. Singletons, factories, commands and strategies are software components that everyone should be using to solve the problems they are meant to solve.

So, by all means, don’t fall into the trap of blindly obeying someone else’s solution because it is some sort of best practice. But, equally and perhaps even more importantly, don’t discount the benefits to be had from the open-minded study and usage of design patterns.

Communicating between WCF services and J2EE

March 20, 2009

We started a new project this week at my day job, where we are creating a common set of services using WCF in .NET 3.5 SP1, and in addition to the .NET applications making use of them, we wanted some Java systems to communicate with them. The Java systems are an ESB running in Apache ServiceMix, and some other as yet undefined Java systems running on Solaris boxes.

We chose WCF for the ease of creating new web services – it does all the heavy lifting for you and allows you to swap protocols and addresses transparently. We chose .NET 3.5 SP1 both because it is new and shiny, and because it offers the Entity Framework, and we like the Entity Framework.

But now we were faced with the question: how do we make the Java code call the services? And indeed, since we are a bunch of Java noobs, how do you call web services in Java at all? We called on the great Google, and it showed that Apache Axis seemed to be a popular tool for both publishing and calling web services. So, deciding that who were we to argue, Axis it was.

I created a default web service, ran it from the IDE, and pointed Axis at it to generate a stub from the WSDL with the WSDL2Java program included in the Axis distribution. The location of the WSDL is shown on the test page that Visual Studio brings up when you run your service.
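The invocation looks roughly like this (a sketch, assuming the Axis jars are on the classpath; the service URL is illustrative):

java org.apache.axis.wsdl.WSDL2Java -o generated http://localhost:1234/GreetingService.svc?wsdl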

Uh-oh, no luck. The Axis client complained that there was something in the WSDL it did not understand. I’ll reproduce it and post the exact error message here. When I googled that error message, I found a whole bunch of interesting pages, but nothing that helped me understand what the problem was.

Then I thought about it for a while and remembered that WCF web services are supposed to ship with all the WS-* standards enabled, and reasoned that maybe Axis did not like these newfangled things and wanted a simpler SOAP interface.

After a dead end or two, I realised that you can change the endpoint binding in the WCF config file to basicHttpBinding, and that this then publishes the web service as an old-style SOAP service, which Axis was happy to generate clients for.
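The relevant web.config change looks roughly like this (the service and contract names are illustrative):

<system.serviceModel>
  <services>
    <service name="MyNamespace.GreetingService">
      <!-- basicHttpBinding exposes a plain SOAP 1.1 endpoint -->
      <endpoint address=""
                binding="basicHttpBinding"
                contract="MyNamespace.IGreetingService" />
    </service>
  </services>
</system.serviceModel>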

I tested the clients, and hey presto! We could invoke the WCF web services from the Java code.

But, never being content to leave well enough alone, I decided that I wanted to remove the WCF web services from the tempuri.org namespace and put them in a more appropriate one. I added namespace information to the service by modifying the service interface, the service implementation, and the binding namespace in the web.config file.

These are the things I changed:
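Roughly the following, sketched here with an illustrative namespace and type names:

using System.ServiceModel;

// 1. The service interface
[ServiceContract(Namespace = "http://example.com/services")]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

// 2. The service implementation
[ServiceBehavior(Namespace = "http://example.com/services")]
public class GreetingService : IGreetingService
{
    public string Greet(string name) { return "Hello, " + name; }
}

And in web.config, the binding namespace on the endpoint:

<endpoint address=""
          binding="basicHttpBinding"
          bindingNamespace="http://example.com/services"
          contract="MyNamespace.IGreetingService" />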

Then I attempted to regenerate the clients with the Axis tool, and it complained with an error message about invalid WSDL structure. After thinking on this for a bit, and comparing the original tempuri.org namespaces in the WSDL to my new ones, I realised that Axis was expecting certain relationships between the namespaces on the three attributes – and then I realised that the namespace in each of the three places mentioned above must be EXACTLY the same.

I changed them all to be identical, and there it was. The client stubs generated correctly, and they were able to successfully invoke the services.

As our good friend Borat says so wonderfully: “Great success!”

I have arrived!

March 20, 2009

Ok, so I have finally gotten so far as to start a blog. Hopefully we can have some interesting and illuminating discussions on all things related to programming.

I decided that I would start one so that I can post things as I learn them (I have some interesting ones stored up), so let’s see where this takes us, hmmm? And besides, all the cool kids are doing it!

