Tuesday, May 1, 2012

Windows 8 Metro Application Development Posts

Why Does It Matter?

Windows 8 Release Preview (formerly known as a Release Candidate (RC)) will be available in early June, and you can read about the announcement here.  That means it's one step closer to market, and the need for available Metro applications is becoming more of a reality.

Windows 8 brings the Metro design paradigm into full effect.  It's the new next door neighbor of your traditional desktop application.  Are desktop applications going away in Windows 8?  No.  Will they ever go away?  Unlikely.  This is a whole topic in itself, but in brief: Metro adds a new dimension of how users and applications interact.  There will be Metro style applications and there will be desktop applications.  Some desktop applications cannot be Metro style.  My posts will focus on providing the knowledge to build Metro style applications.

Windows is still the largest software ecosystem.  Fan of it or not, in some way, shape, or form, we'll need to embrace it, especially those of us that work on the Microsoft development stack.  Even if you won't be directly affected by it, does it hurt to stay ahead of the curve and learn new skills now rather than play catch up?

Again, my intent is not to convert anyone to Windows 8, Metro application development, or onto the Microsoft development stack.  My intent is to provide those interested or working in Windows 8 and Metro application development with additional knowledge.  Hopefully, if I've done a decent job at communicating my experience, you'll give it a shot, but at the end of the day I'll let the products speak for themselves.

How Will I Help?

The topics that I am choosing to write about, in my opinion, are features or knowledge that will/should be consistent across all - yes I said all - Windows 8 Metro applications, or will be used more often than not in a Windows 8 Metro application.  Additionally, I encourage feedback on what others feel is important and any other areas I can cover.

Some things will be very easy and others will be more complex.  Again, hopefully I do a decent job in communicating.

Again, these posts will not be in any particular order, but I will try my best to sequence them in the most appropriate order.  As each post is published I will revisit this post and update the topic title to embed the link, and if a topic is broken into multiple posts I will link them as sub items.  Lastly, I am treating this list as a living one; it will be modified to include or remove topics.

Also, most of my examples will be focused on the JavaScript/HTML5 project templates.  I'll only use the C#/XAML projects when appropriate, when I change it up, or when anyone specifically asks - but don't push your luck with C++.

Hopefully too, if any of my friends are feeling generous, they'll make some celebrity guest appearances on topics.  You'll get to experience a different perspective and see topics expanded on where I may not be as deep.  It's a win-win.

Onto the meat and potatoes...

Topics

  1. What is Metro & Its User Experience Implications - Post(s) will provide an introduction to the Metro design language and how it affects user experience.
  2. WinRT - Post(s) will provide an introduction to WinRT and how development is streamlined.
  3. Windows 8 Contracts: Leveraging Search, Sharing, and Settings - Post(s) will demonstrate how quickly Search, Sharing, and Settings can be integrated into your application and provide a seamless user experience in Windows 8.
  4. Building an Application Bar - Post(s) will explain the purpose of an Application Bar and how to implement one in your application.
  5. Roaming - Post(s) will detail how to maximize the user experience of your application by integrating with mobile and traditional web browsing.
  6. Leveraging Geolocation & Bing Maps API - Post(s) will demonstrate using Geolocation services to interact with Bing Maps.
  7. Live Tiles, Badges, and Toast Notifications - Post(s) will cover how to incorporate Live Tiles, Badges, and Toast Notifications to enrich the user experience and make your application interactive.

What Can You Do?

In the meantime, if you haven't already, check out the official development site for Windows 8, dev.windows.com, and download the Windows 8 Consumer Preview and development tools.  Test drive the new OS, read up on the Metro principles, and enjoy!

Lastly, as I've mentioned before, if there are any topics that you're interested in, please let me know and I will do my best to address them.  And do not hesitate to reach out with any questions.  It would be beneficial for everyone if the comments section were utilized; in this format, questions and answers are open to everyone.

Cheers!

Monday, April 30, 2012

Blog Revival!

Guess Who's Back?

Catching Up...


It's been a while - about two years - since I've posted anything to my blog; a damn shame if you ask me.

A lot has changed in the past couple of years: I've gotten married, changed jobs, acquired another dog, gained/lost weight, lost more self-shame/dignity, started shaving with a straight razor - just to name a few.

Why did I go quiet?  Many reasons, all of which tie back to time.  No excuse, though; there are plenty of bloggers out there with the same or less free time than I have who still maintain amazing blogs.  Failing to plan is another.  I never planned out the posts I wanted to write, and I lost the motivation to make the time.

So you're probably wondering, why the change of heart?  One, I like to share my knowledge and help others; and two, I'm selfish.  Blogging is another medium for documenting; it provides me another means of referral when I forget something.  Sharing knowledge is crucial in the IT world.  It's constantly evolving, and to survive we're pushed into continuing education.  We're one big community and have a lot to learn from each other.

About a year and a half ago I was fortunate enough to have a kind friend refer me to Microsoft.  After a couple months of waiting, I was given the opportunity to go thru the interview process.  It was successful, and I began my consulting career at Microsoft a little over a year ago.

I digress.  Onto the interesting stuff.

 

'Ol Fashioned Revival!

Blog v2.0


Put on your dancin' shoes, bring your singin' voice, and be ready for some fun filled evenings.

My goal - with better planning this time around - is to post a topic once or twice a week.  This blog will still be development focused infused with my humor and sarcasm and a side of pontificating from time to time.

What you can look forward to over the next couple weeks are some intro development posts on Windows 8.  No, I will not be writing dedicated "this is Windows 8, blah blah blah" posts.  One, there are plenty of those out there already and I wouldn't do the subject complete justice.  Two, I really don't feel like getting into flame wars with iOS and Android users in the comments section.  There are a million ways to skin a cat: there's no right or wrong way, though some ways are better than others.

The posts will be like my train of thought: random and in no particular order.  For purposes of brevity and preventing information overload I will probably dissect certain posts into smaller ones that build on each other.  Some will be intro topics and others will be advanced.

I'll be posting my preliminary list of topics for Windows 8 development soon.  Not specific to my Windows 8 posts, but my posts in general: if there are any topics you're interested in that I might have missed, or that you'd like me to cover in greater detail, please let me know and I will do my best to accommodate.

Most importantly, thank you!

Cheers!

DISCLAIMER: I am the author of this blog and am ultimately responsible for its content.  I am an employee of Microsoft, but the content in this blog is that of my own opinion and creation, and does not reflect the position of Microsoft services and/or products.  Specific to Microsoft services and/or products, only publicly available information will be shared.  With that said, I welcome questions and will answer as appropriate, but please do not solicit confidential information.  Thanks and enjoy!

Tuesday, May 18, 2010

Prevent @@IDENTITY Theft

I've been working on a database design that will be importing multiple feeds and exporting data thru SSIS, and have created various audit fields in my tables to link a record to a specific input file.  The overall SSIS solution I've chosen is a topic for another post, but while figuring this out, I ran into a common operation that a developer runs into: returning the identity value of the last inserted record.


Why am I writing about this?  For starters, when I began thinking about this I wanted to refresh my memory on what my options were, and share my findings and thoughts.  Secondly, it is imperative that the most applicable method is chosen; if not, the risk of data compromise increases greatly.  So I wanted to describe my situation, plus a brief "catch-all" situation we might encounter in our day-to-day design and development.

Meat & Potatoes

I already know from my requirements what the frequency of the reports would be and when they would be delivered to us.  Therefore, I know I can schedule each import task to run and not compete with any other import; thus, guaranteeing that my last inserted id in the audit table will be unique to that feed.

However, what happens if you cannot guarantee that each insert will be unique to that specific operation - such as a web application with concurrent users or a service that listens for input files coming in at various times or concurrently?

So which method do I choose or which method should you choose?  Let's take a look...

For those that aren't familiar, there are three different ways to retrieve the last inserted id for an identity column in SQL Server:
  1. @@IDENTITY
  2. SCOPE_IDENTITY()
  3. IDENT_CURRENT('table_name')
The links will take you directly to the SQL Server 2008 Books Online reference in MSDN.  I will not cover deep dive details, just a high level understanding with small examples.

@@IDENTITY

"@@IDENTITY returns the last identity value generated for any table in the current session, across all scopes."

Bad news bears.  This pretty much says: give me the last identity that was created by anyone, anywhere.  I know that I don't need the last identity generated; in my case that would return the id of the last record I inserted into my data table, not my audit table.  In a concurrent user environment this could return the id from an unrelated table that had a record added by a completely separate user.
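The classic way this bites is an insert trigger.  A minimal sketch - with hypothetical table and trigger names - of how @@IDENTITY and SCOPE_IDENTITY() diverge:

```sql
-- Hypothetical tables: an audit table, plus a data table that an
-- insert trigger writes to behind the scenes.
CREATE TABLE AuditLog (AuditId INT IDENTITY(1,1) PRIMARY KEY, Note VARCHAR(50));
CREATE TABLE DataRow  (DataId  INT IDENTITY(100,1) PRIMARY KEY, Value VARCHAR(50));
GO

CREATE TRIGGER trg_AuditLog_Insert ON AuditLog AFTER INSERT AS
BEGIN
    -- The trigger's insert generates an identity value of its own...
    INSERT INTO DataRow (Value) SELECT Note FROM inserted;
END;
GO

INSERT INTO AuditLog (Note) VALUES ('feed #1');

-- @@IDENTITY returns the identity from the trigger's insert into
-- DataRow (100), NOT the AuditId we actually wanted (1).
SELECT @@IDENTITY       AS LastIdentityAnyScope,
       SCOPE_IDENTITY() AS LastIdentityThisScope;
```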

To me, this is the least desirable method and runs a much higher risk of compromising data leading to my trademark, "Things That Go FAIL."

SCOPE_IDENTITY()

"SCOPE_IDENTITY returns the last identity value generated for any table in the current session and the current scope."

Ok.  This doesn't help me, but it may help you.  We can now say that I will get back the last id value generated in my current scope.  Again, in my case this still doesn't work: I will still get the last id from the data table and not the audit table.

In a concurrent application, this will at least give you the last id from your current scope.  However, what if you are doing multiple inserts in one stored procedure?  You would have to consider your order of operations to ensure that you are returning the id value you need, which may not be possible because of foreign key relationships and how the inserts cascade.  Or, immediately following the insert whose id you need, you can assign SCOPE_IDENTITY() to an output param of the stored procedure.  Better, but not great.
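A sketch of that last option, with hypothetical procedure and table names - capture the id with an output param right after the insert you care about:

```sql
CREATE PROCEDURE usp_InsertAudit
    @Note    VARCHAR(50),
    @AuditId INT OUTPUT
AS
BEGIN
    INSERT INTO AuditLog (Note) VALUES (@Note);

    -- Grab the id immediately, before any other insert (e.g. a
    -- cascading child-table insert) runs in this scope.
    SET @AuditId = SCOPE_IDENTITY();
END;
```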

IDENT_CURRENT('table_name')

"IDENT_CURRENT returns the last identity value generated for a specific table in any session and any scope."

I like the sound of this.  Soup to nuts: give me the last id value for the table I specify that was created by anyone.  Pretty much a SELECT MAX(id_field) of the specified table.

I've finally reached the potato salad in the buffet line.  Given my requirements and design I can now update my inserted record with the appropriate id from my audit table.

How does this help in concurrent applications?  It really doesn't.  You still assume the risk that another user has updated the table after you.  I would stick with a SCOPE_IDENTITY solution while being aware of order of operations and/or assignment of the output param following the given insert operation.

Or we can go ahead and use GUIDs to ensure that we have a truly unique id for every insert operation, and pass those around using parameters.
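A rough sketch of the GUID approach (hypothetical table names): the id is generated up front with NEWID(), so there's no "last inserted id" lookup at all:

```sql
-- Generate the id before inserting; nothing to fetch afterwards.
DECLARE @AuditGuid UNIQUEIDENTIFIER = NEWID();

INSERT INTO AuditLogGuid (AuditGuid, Note)
VALUES (@AuditGuid, 'feed #1');

-- @AuditGuid can now be passed around as a parameter to link the
-- imported rows back to this audit record - no concurrency risk.
INSERT INTO DataRowGuid (AuditGuid, Value)
VALUES (@AuditGuid, 'some imported value');
```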

Things That Go FAIL

All three of these methods can go FAIL.  Understanding how each method operates, and what your data means and is supposed to do, will lead to a well-informed decision.
  1. @@IDENTITY - You simply cannot guarantee that the last id returned came from your specific operation on the table it was performed on.
  2. SCOPE_IDENTITY - If not used properly.
  3. IDENT_CURRENT('table_name') - If not used properly.
Mongo Sez
  1. Read up on the three techniques in SQL Server Books Online to gain an understanding of what each one does.
  2. I did not cover the implications of these methods in a transactional or replicated environment.  There are other considerations of using identity values in this circumstance.
  3. Understanding your data and how it's being used is imperative - as in any database design.  A full understanding will prepare you to make a well informed decision on your approach.  Proper design not only helps your data become information, but can greatly impact those down the line from you: other database developers working with your design, application developers consuming your database, and your end users.
  4. I <3 potato salad.
Questions, comments, am I completely out-of-my-mind?  Let me know.

Tuesday, May 11, 2010

Three's Company: Subsonic 3.0.0.4, Visual Studio 2010, and SQL Azure

Come and Knock On Our Door...

Subsonic 2.x and Visual Studio have been living together quite happily for a while now in our shop, but we decided it was time to upgrade to Subsonic 3.0.0.4. We figured now that we've moved to VS 2010 it was appropriate to move on up to the latest release of Subsonic.

Also, what better opportunity to introduce our very own Jack Tripper (SQL Azure) to these bubbly companions?

So, I set out to accomplish a few things:

  1. Ensure that Subsonic 3.0.0.4 does not cat fight with VS 2010.
  2. Create and deploy a MVC2 web role Cloud solution.
  3. Create a data layer in said MVC2 project using Subsonic generated objects from the ground.
  4. Create a data layer in said MVC2 project using Subsonic generated objects from the cloud.
  5. Ensure that Subsonic ground and cloud generated objects play nice in the cloud.
  6. Write code as dirty as possible to prove this.
Let's take a look...

Meat & Potatoes

Subsonic

If you're not familiar with Subsonic, learn about it and download it here.

Sample Database

For starters, I went and created a sample database in SQL Server 2008. Real simple, doesn't need to be anything complicated; one table with a few columns of various data types, inserted some data, and wrote two basic stored procedures: get data and insert/update data. This is adequate for our exercise.

Also, remember SQL Azure requires a clustered index on the table or you'll be noting something under, "Things That Go FAIL." So, to be safe, I explicitly create my clustered index on the primary key of my table, as seen below. I do this because I'll also be using this SQL to create the stuff in Azure.
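A sketch of such a create script, with hypothetical table and column names - the point is the explicit clustered index on the primary key:

```sql
-- SQL Azure rejects heap tables, so the primary key is explicitly
-- declared as the clustered index.
CREATE TABLE SampleData
(
    SampleId    INT IDENTITY(1,1) NOT NULL,
    Name        VARCHAR(50)       NOT NULL,
    Amount      DECIMAL(10, 2)    NULL,
    CreatedDate DATETIME          NOT NULL DEFAULT GETDATE(),
    CONSTRAINT PK_SampleData PRIMARY KEY CLUSTERED (SampleId)
);
```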



Cloud Service Solution

Next, I create my Cloud Service solution and add the MVC2 web role project. If you haven't done so already, follow the steps in VS to download and install the Azure tools. NOTE: If you're in Win7 run VS as an administrator.



Compile and run it to make sure everything is set up correctly. If you encounter any issues regarding the development storage database, make sure that you have run the DSINIT command:

  1. Open the Windows Azure SDK Command Prompt (run as admin on Win7)
  2. If running SQL Server 2008 or 2005 Express type: DSINIT /sqlinstance:.\MyInstanceName
  3. If running SQL Server 2008 or 2005 non-express editions type: DSINIT /sqlinstance:MyInstanceName - NOTE: You do not need to specify the ".\" when configuring for a non-express edition.
  4. Connect to your SQL Server instance and verify that the DevelopmentStorageDb... database was created.

If the command fails, refer to my previous post and this post for potential fixes.

Go ahead and add a new directory to the MVC2 web role project and name it Lib. Add the SubSonic.Core.dll to it. I like creating this directory for third party binaries and adding my references from here. It's useful as the source gets passed around the dev team - there's a central location in each project for the assemblies, which lessens the risk of a fellow developer not having the appropriate binaries locally or having them installed in other locations.

Once you know you're good to go with the out-of-box stuff we can go ahead and create our data layer projects.

Data Layer

First, create a new C# class library project within your solution named: GroundDataLayer, and add a reference to the SubSonic.Core.dll from the previously created Lib folder.

This is the project that will contain the objects that Subsonic creates for you from your ground SQL instance.

Then, before we configure and load the Subsonic T4 templates, add an App.config to the project, add the connectionStrings config section, and add a connection string for your ground instance.
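A minimal App.config sketch - the server and database names here are placeholders, not the post's actual values:

```xml
<!-- Hypothetical instance/database names; point this at your ground SQL instance. -->
<configuration>
  <connectionStrings>
    <add name="SampleDb"
         connectionString="Data Source=.\SQL2008;Initial Catalog=SampleDb;Integrated Security=True;"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```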

Next, update the Settings.ttinclude file following the README instructions in the T4 Templates directory. Notepad will suffice for this. I used the ActiveRecord template, but experiment and let me know how the Linq templates work.



After you've updated the ttinclude it's time to add the template files to your project. Right-click on the GroundDataLayer project and choose Add | Existing Item and navigate to the folder of the template you chose. Select all the items in the folder and add them to the project.

If you get prompted about the T4 templates plotting terroristic actions against your computer, go ahead and allow them. That said, always be mindful of any T4 templates you download and use in the future. I digress...

This will now auto-magically generate the necessary code for you to access your database.



What's even cooler: say we add another stored procedure to our database. Right-clicking on the StoredProcedures.tt in the project and choosing Run Custom Tool will auto-magically regenerate the class to include your changes! NEAT! Sorry, I get excited when I can work smarter.

Hack Some Code

Open the web.config for your MVC2 role project. Copy the connection string from your GroundDataLayer App.config and place it in the auto-magically provided ConnectionStrings config section.

Now, let's go get some data. You're at liberty to do things the right way or hack the Home view of the project as I did. I have no shame remaining, so I'll share what I did:

  1. Utilized ViewData["Message"] to inject the data from our sample database into the view.
  2. Directly instantiate the Subsonic stored procedures and execute a reader in the Home controller.


Way to use the MVC pattern, right? I'm just trying to prove this stuff works, not be pretty.

Once you've got this coded, go ahead and compile, fix any errors, and run the beast. If everything is coming up aces you should see your sample data in the application.

Head in the Cloud

Remember the SQL we wrote before for the ground? We're now going to put this to use in SQL Azure.

To connect to your SQL Azure instance thru Management Studio, open a new query, right-click in the query window, and choose Connection | Change Connection. This will display the connection manager dialog. Put in your server instance and credentials, then go to Options and type in the database you wish to use. NOTE: Ensure that you've previously added your IP address to the firewall for SQL Azure. Also, you do not need to specify userid@server when connecting to SQL Azure thru Change Connection; you will need to do this in your App.config and web.config files.
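For reference, a sketch of what the userid@server form looks like in a config file - server name and credentials below are placeholders:

```xml
<!-- Hypothetical server/credentials. Note the userid@server form that
     SQL Azure expects in config files (but not in Change Connection). -->
<connectionStrings>
  <add name="SampleDb"
       connectionString="Server=tcp:myserver.database.windows.net;Database=SampleDb;User ID=myuser@myserver;Password={your password};Encrypt=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```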





Hopefully you're still coming up aces and you're now connected to your database in the cloud. If so, go ahead and run the SQL from before. If not, fix any issues and try again.

Dancing in the Cloud

Repeat the same steps above for creating another C# class library project, but naming it CloudDataLayer and in the App.config specify that the connection string points to your cloud.

Modify the T4 template's namespace you chose to avoid ambiguous reference issues with the GroundDataLayer project and add the files to the CloudDataLayer project. Again, allow the templates to run and generate the code. Also, do not worry about the change to the namespace as it will not conflict with the ones in the GroundDataLayer project.

Duplicate your code to display the data from your GroundDataLayer project being sure to reference the CloudDataLayer. Lastly, before compiling and running modify the web.config to point to your cloud database instance.

Running the application you should now see your data being pulled from your cloud database instance using the objects generated from your ground and cloud instances.

Are we done yet? I think not...


While this proves that yes, Subsonic 3.0.0.4 works nicely in VS 2010 and you can run your application locally and pull data from the cloud using objects created in either location, it doesn't prove that there will be no issues running the application in the cloud - which is what really matters.

NOTE: In the event of any run time errors you can comment out the [HandleError] attribute on the controller and turn off friendly remote errors. This is especially beneficial if something goes FAIL in the cloud.

Blast Off!

I'm going to assume you're familiar with configuring a Windows Azure service and deploying your application to the cloud, so I won't bore you with the details. If not, here's the nutshell:

  1. Publish your cloud service project by right-clicking on the project and selecting Publish. This will create two files: 1.) the application package, which contains the application code; and 2.) the configuration file.
  2. It will auto-magically open up the Windows Azure portal, and a Windows Explorer dialog to the Publish directory where those two files are.
  3. In the portal upload the two files and provide a label for this deployment. The label is arbitrary and can be anything.

While Windows Azure is publishing the application and starting the service, this would be a good time to ensure that Windows Azure is allowed to access your SQL Azure instance. Yes, you need to add a firewall exception for Windows Azure, but don't worry I'll throw you a bone and give you the SQL. If Windows Azure isn't added you will see a message like this:



Go ahead and reconnect to your SQL Azure instance following the steps above, but this time specifying the master database instead of the sample you created. Run this query:

select * from sys.firewall_rules

If any record returned includes the IP range 0.0.0.0 - 0.0.0.0, then Windows Azure has been added. If not, run this query:

exec sp_set_firewall_rule N'Allow Windows Azure', '0.0.0.0', '0.0.0.0'

Run the select query again and ensure it's included.

By now, your application should have published and the service started. If the service is suspended, start it up; if it's still propagating, go ahead and smoke 'em if you got 'em.

Comin' In Hot

Navigate to your application instance in the cloud. If you're still aces you should see your data from the SQL Azure database instance using the ground and cloud generated Subsonic objects. If not, and you've turned off friendly remote errors, use the error details to resolve the issues. If you didn't turn them off, now's the time to do it and update your application in the cloud.

Here's where I leave someone to experiment: I didn't test the insert/update stored procedure in my test app. I left it to my own devices and the assumption that it would work. In the meantime, before I test it, I'd love to hear from anyone who has.

Things That Go FAIL

  1. Forgetting to add a clustered index to your table.
  2. Not having the Cloud Service tools installed.
  3. Not running Azure SDK Command Prompt and VS2010 as admin in Win7.
  4. Not creating your DevelopmentStorage... database.
  5. Not updating the T4 template Settings.ttinclude file per the README.
  6. Not allowing the T4 templates to generate your code.
  7. Not referencing the SubSonic.Core.dll in your data layer projects.
  8. Attempting to directly connect to SQL Azure thru Connect To | Database Engine.
  9. Not adding a SQL Azure firewall rule for your IP.
  10. Not adding a SQL Azure firewall rule for the Windows Azure IP range.
  11. Hacking up MVC to prove this.
Mongo Sez

  1. To be safe, explicitly add your clustered index on the primary key of your table(s) in SQL Azure.
  2. Run everything as admin in Win7.
  3. Follow the instructions in VS 2010 for installing the Cloud Service tools.
  4. Use DSINIT to create your DevelopmentStorage... database.
  5. Create a Lib directory for third-party binaries and add reference from there - it helps everyone in the end.
  6. Modify the Settings.ttinclude file per the README instructions for the T4 template.
  7. Be smart when using third-party T4 templates for security purposes.
  8. Use a new query and change the connection to connect to your SQL Azure database.
  9. Firewall rules for your computer's IP and Windows Azure. Here's the how to from MSDN.
  10. Don't follow my code examples when I'm doing a quick POC.
Three's company too!

Friday, May 7, 2010

Windows Azure Tools - Development Storage Init Failure - RESOLVED!!!11111!



Failed to create database 'DevelopmentStorageDb20090919' : Instance failure.

So, I have resorted to posting here and appreciate anyone's comments and suggestions on how to resolve the issue in the screen capture above. I was able to work thru one issue by enabling the Named Pipes and TCP/IP in SQL Server Network Configuration.

Also, I am having no issue connecting using windows auth or the sa account thru sqlcmd. Secondly, I am running the Azure SDK Command Prompt as an administrator.

Little background: I did not install SQL 2008 Express at the time of installing Visual Studio 2010. I installed a full version of SQL 2008 and SP1 after installing VS2010.

Thanks in advance!

UPDATE - Issue is Resolved.

Things That Go Fail

  1. Me, Mongo. End of story.

I had epic failure and some code emo. It turns out that it was in my DSINIT command. According to About Development Storage, it is not necessary to supply the ".\" in your instance name. Simply dsinit /sqlinstance:SQL2008, or whatever your instance name is, will suffice.


This is the proper syntax when specifying a SQL Server 2008 instance and NOT a SQL Server 2008 Express instance.

Thanks to the SQLAzure team and Steve Marx for guidance!

Thursday, May 6, 2010

Act 1: ASP.NET MVC2 and Strongly Typed HTML Helpers and EditorExtensions - OH MY!

Scene 1 - Intro

This is part one of a multiple post series that will dive into some of the new features of ASP.NET MVC2 and Visual Studio 2010.

So, Visual Studio 2010 and MVC2 have been a couple for a month. I hope everyone has gotten their hands a little wet or better yet, completely drenched and wrinkly, but not to the point of hypothermia, of course.

I'll admit that I have developed an obsession with the Strongly Typed HTML Helpers, as described here by Scott Guthrie, and the scaffolding capabilities within MVC2. This stuff is pretty slick and follows the pattern that, I believe, Microsoft has laid out with ASP.NET and Visual Studio for rapid development.

What am I getting at?

Meat & Potatoes

Scene 2 - Mongo Meets EditorExtensions

Being the inquisitive developers we are, we decided to take a look at the EditorExtensions in System.Web.Mvc.Html and what they can provide out-of-box. We've done a lot of work in MVC1 with strongly typed models, custom views, routing, and jQuery to build a pretty neat platform. Our thoughts really boiled down to: what are the instant takeaways and how can we apply the new tools in our future work? Thus began the retrieval of hard hats and shovels.

If you're not familiar, MVC2 provides the option of creating strongly typed views. When creating a new view you can now associate the view with a model and specify the CRUD behavior in the IDE.

This is where I broke ground.

Putting It Together

Scene 3 - Mongo's Turbulent Relationship

I began by creating a model, Department, and added a couple properties and specified some metadata for the properties.
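A hedged sketch of what such a model could look like - the property names and metadata attributes here are illustrative, not the post's actual code:

```csharp
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;

// Hypothetical Department model with metadata driving the helpers:
// DisplayName feeds LabelFor, the validation attributes feed MVC2's
// model validation.
public class Department
{
    [DisplayName("Department Id")]
    public int DepartmentId { get; set; }

    [Required]
    [DisplayName("Department Name")]
    [StringLength(50)]
    public string Name { get; set; }
}
```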



I personally like to code a little and then move onto the next step, which in this case would be creating my view. I selected create a strongly typed view and decided that my CRUD behavior would be Edit - selected from the View Content dropdown.



So, with a little help from my friend MVC2 and VS, I now have a view that provides me with text boxes using the TextBoxFor and the LabelFor extensions. Needing instant gratification I built the solution and ran it to see my Department model in its editable glory.



Knowing that I didn't foo'bar anything, I went back and decided to add some more properties to the Department model. Being satisfied with the new properties, I wanted to update my view to reflect these changes. I right-clicked on my view and much to my surprise there was no "refresh my view" option, or the like. I had just hit the buried utility line (remember, call before you dig) and stopped all work to assess the situation and whine. Beggars can't be choosers; I got over my disappointment that I wouldn't be able to do the bare minimum to get this to work, and dug a little deeper.



Scene 4 - Mongo Work Smarter, Not Harder

Knowing I was looking for an edit option, and armed with the awesome IntelliSense in VS2010, I narrowed my options to the EditorFor... extensions and struck oil: EditorForModel(). I added this in under what was provided when creating the strongly typed view, for comparison purposes.
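For comparison, a sketch of the view markup (the property name is illustrative) - the scaffolded per-field helpers above, and the one-liner below:

```aspx
<%-- Scaffolded output from the strongly typed Edit view: one
     label/text box/validation trio per property. --%>
<%= Html.LabelFor(model => model.Name) %>
<%= Html.TextBoxFor(model => model.Name) %>
<%= Html.ValidationMessageFor(model => model.Name) %>

<%-- One call renders a label and editor for every property in the model. --%>
<%= Html.EditorForModel() %>
```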



Building and running the application I can now see all the properties as editable fields using EditorForModel. Pretty f'in cool, one method call and I get the labels and text boxes for my model.



This got the steam engine in my head fired up, and I began wondering, "This is great, but what happens if I have read-only properties in my model?" or, "What if I want to have multiple edit views for my model?" Now my hole is filling with water, and I really hate when that happens because I never have the appropriate footwear.

Crap, I now have to revert and actually do work: manually adding all those labels and text boxes to the view to exclude the properties I don't wish to edit, or excluding those properties from my model. Being stubborn, and for the sake of the argument, I don't have the option to remove any properties from the model.

Making sure I wasn't reading the blueprints upside down, I referred to the EditorExtensions reference to see if I could query my model to display fields for only the properties I specify.

No dice.

Scene 5 - Mongo Has Salt Poured on His Wounds

All the EditorForModel descriptions begin, "Returns an HTML input element for each property in the model..." NOOOOOOOOO, not for each property!

After stomping my feet and whining some more, I went and got the heavy artillery: Sean Iannuzzi. He's our architect, and I figured he'd throw me a bone, tell me I missed something, pat me on the head, and send me on my way. False. After explaining the situation and him working thru the disappointment, it was time to take action...

Intermission. Fin Act 1.

Mongo Sez (Summary)

  1. Freshin' up a little on MVC2 Strongly Typed Html Helpers with Scott Guthrie.
  2. Can has out-of-box strongly typed views.
  3. EditorExtensions - can't live with 'em, can't live without 'em.
  4. I am a roller coaster of emotions when coding, aka: code emo.
I will be posting Act 2: ASP.NET MVC2 and Constraints on Type Parameters and Crazy Code - OH MY! in the coming days.

If I've depicted anything incorrectly or am making any blatant oversights - which happens when I'm getting code emo - please feel free to leave comments. Also feel free to leave any comments/questions regarding Html Helpers or the Editor Extensions. I'll write back and update as I can.

Thursday, April 29, 2010

SQL Azure Architecture Overview

Continuing the theme of Windows Azure posts, let's take a look at the SQL Azure architecture.

Meat & Potatoes

There are four layers that comprise SQL Azure:

  1. Client Layer
  2. Services Layer
  3. Platform Layer
  4. Infrastructure Layer
Client Layer

The client layer is closest to your application. This can be on the ground or in the cloud. Here you still have access to your favorite tools and libraries you use when building applications.

Services Layer

The services layer is the parkway to get from the client layer to the next layer, the platform layer. It serves these purposes: provisioning, billing/metering, and connection routing.

The services layer handles the provisioning of the database instances that you specify thru your Windows Azure account. Billing and metering is the money mechanism and determines what your usage and billing amount will be for the period. Connection routing is what keeps your application connected to everything as you scale, etc.

Platform Layer

The platform layer is the physical layer of the servers and services that support the aforementioned services layer. This is all managed thru the SQL Azure fabric. Think of the fabric as the components that make up your ground infrastructure (failover, load balancing, etc.).

Infrastructure Layer

The infrastructure layer is all the fun of administering the physical hardware and operating systems of the services layer.

Mongo Sez (Summary)

  1. Four layers of SQL Azure: Client, Services, Platform, and Infrastructure.
  2. Client: Closest to your application and your everyday tools and libraries.
  3. Services: Provisioning, Billing/Metering, and Connection Routing.
  4. Platform: Physical layer of servers and services managed thru SQL Azure fabric.
  5. Infrastructure: Administration of physical hardware and operating systems.
  6. Image source and detailed overview of SQL Azure architecture: MSDN Reference
  7. Also check the SQL Azure General Guidelines and Limitations on MSDN.