Requirements for a real-time cloud marketplace

There’s been lots of press lately (AWS’s Spot Instances, Zimory, and here’s one from Vinton Cerf) about the cloud moving toward becoming an interconnected set of compute and data-processing infrastructure. One step toward that is certainly some level of interoperability for deploying apps — e.g., what RightScale essentially does today, or libraries like libcloud that provide a consistent interface to many clouds/providers.
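
To make the interoperability point concrete, here is a minimal libcloud sketch (the credentials are placeholders); the point is that the same calls work unchanged across provider drivers:

    # Minimal sketch of libcloud's consistent multi-cloud interface.
    # The credentials are placeholders.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    Driver = get_driver(Provider.EC2)   # swap in another Provider.* constant; the rest is unchanged
    driver = Driver('ACCESS_KEY_ID', 'SECRET_KEY')

    for node in driver.list_nodes():    # same call regardless of provider
        print(node.name, node.state)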

I sat down about six months back with Lou Springer to discuss the idea of a cloud marketplace, and we came up with a few items that would need to be addressed:

– workload transportability — it’s beyond encapsulation in images; you need a description language for the workload’s relationships to data and its “data physics” (a rough descriptor sketch follows this list)

– network transportability — our DNS-based way of doing things on the net has been pretty broken for a while. One approach is to provide something like an escrow service that handles service-delivery addressing. Payload size is also an issue (see above)

– workload rating/pricing — all apps have a time to live. How do I price my workload? What access does it need? What are the constraints on this model?

– capacity management — how do I know what capacity I have in order to provide a price?

– run-time permissions/revocation — at some point I will want to ensure that stray workloads are no longer run by providers I don’t want running them.

– provider indemnification — in some cases providers would rather remain largely unaware of what is running. Network ports and the like are fair game, but some providers may not want to be able to run reports against your workload. Is there a model that allows us to encrypt “everything” and still provide the other values above?
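
To make a few of these items concrete, here is a rough, hypothetical workload descriptor sketched as a Python structure; every field below is invented for illustration and implies no existing standard:

    # Hypothetical workload descriptor -- every field name is invented for
    # illustration; no existing standard is implied.
    workload = {
        "image": "app-v1.2.ovf",              # encapsulation alone isn't enough...
        "data": {                             # ...so describe the data relationships too
            "primary_store": "s3://example-bucket/appdata",
            "max_latency_ms": 20,             # "data physics": how far from the data can compute sit?
            "egress_gb_estimate": 50,         # payload size matters for transport
        },
        "pricing": {
            "ttl_hours": 72,                  # all apps have a time to live
            "max_price_per_hour": 0.12,
        },
        "permissions": {
            "revocable": True,                # provider must honor a revoke signal
            "allowed_providers": ["providerA", "providerB"],
        },
        "encrypt_everything": True,           # provider indemnification: run it blind
    }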

I’m sure there’s more. What’s missing from your cloud marketplace?

Part 3 – Cloud’s Impact on Data Center Architectures

This is part 3 of a work in progress that I hope to release in more detail as a paper! Find Part 1 and Part 2 here.

IT Architecture and the Business

It is often said that architecture is a study in tradeoffs. Perhaps it’s best to look at this architecture problem from the perspective of three different types of customers (or divisions, organizations, projects, etc.). These categories are not mutually exclusive — every company has parts of its business in each.

[Figure: the three types of customers]

All of these enterprises provide value to customers. The first provides value through its external connections and “systemness.” The second wants to build connections to what it already has. The third is perhaps shrinking in market share and needs to rapidly redefine its cost structures. They share similar characteristics at times, but they also differ. The quickly growing business often cannot use “off the shelf” technologies because they don’t exist at the necessary scale, are too expensive, and so on.

The second is often looking at legacy data and providing it in new and exciting formats, creating additional business value for itself by enabling others. It may have invested in large-scale databases at centralized facilities. Perhaps it wants to move toward real-time analytics and provide these functions across the world.

The third may more rapidly embrace internal consolidation strategies or public cloud strategies. Optimization for this type of business means quickly reducing complexity and increasing operational visibility: determining what is core and what isn’t, and devising a strategy to deal with each accordingly. This category is constrained not only by cost but generally by the ability to go beyond a single IT platform layer.

These businesses must all make decisions about where to invest and where not to. How do you leverage what provides competitive advantage versus what others already offer? Do you increase connections, increase value by making what you have available to more audiences, or double down on what you need to do to stay in business?

Part 2 – Cloud’s Impact on Data Center and Application Architecture

Overview

Architecturally speaking, IT is at the precipice of a large shift in complexity. The tools (like virtualization) developed over the course of the last decade are now embedded into most IT platforms. This provides benefits and challenges. The benefits are well known to most; the challenges are being experienced by most. A “medium” data center of a few thousand machines in the 90s has grown to perhaps still only a few thousand physical machines, but an ever-growing sea of virtual machines.

As horizontally scaled systems evolved, so did the IT process. Some shops had strong admins with the golden rule: don’t do things more than once, and if you do, script it. They had a model for building machines, operating systems, and applications.
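
In that spirit, here is a toy sketch of the “script it” rule as an idempotent build step; the package tool, package names, and paths are all placeholders:

    # Toy illustration of the golden rule -- the package tool, package names,
    # and paths are placeholders, not a real build system.
    import os
    import subprocess

    PACKAGES = ["httpd", "postgresql"]

    def ensure_packages(packages):
        for pkg in packages:
            # installing an already-present package is assumed to be a no-op,
            # so the script stays safe to run repeatedly
            subprocess.run(["pkg", "install", "-y", pkg], check=True)

    def ensure_config(path, content):
        # only rewrite the file if it differs from the model
        if not (os.path.exists(path) and open(path).read() == content):
            with open(path, "w") as f:
                f.write(content)

    ensure_packages(PACKAGES)
    ensure_config("/etc/app.conf", "port=8080\n")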

A Shift of Control

Over the last couple of years, there has been a shift of control. This shift has enabled faster time to market, perhaps increasing business value, and reduced the number of people involved in the IT process. Developers now have the opportunity to create customized encapsulations of their platforms, and they are able to deploy almost anywhere.

The downside is the loss of centralized IT control, and with it a difficult-to-measure, even difficult-to-characterize, impact on service levels. The business has had to rationalize the tradeoffs between time to market and service levels. It needs both to maintain competitive advantage, and it will need the centralized-IT and developer forces working together.

The Next Shift

One only has to look at the cloud for some examples of the coming shift. This shift affects not only technology but also the organizational and operational aspects of the IT equation. Another critical aspect of this shift is the balance of service level, control, and security. How do I deliver secure IT services in a distributed fashion at the service level I need?

This will all change again when we have global compute capacity, just as we have a global network (the Internet) today. Don’t believe me? Check out The Economist and a few startups, as well as Amazon.

Part 1 – Cloud’s Impact on Data Center and Application Architecture

The first part of a series of three, and a preview of a work in progress…

Introduction

The data center is constantly changing. New technologies, new business demands, and environmental challenges mix with the obsolescence of hardware platforms, facilities, and even operational practices to produce a complex, evolving environment.

There are some clear trends today:

– data is growing

– data centers are getting larger

– data center services are being provided by a growing abstraction of components

– and these services are increasingly delivered in real-time over distributed networks

This paper addresses infrastructure and platform architecture for these challenges, utilizing some well-known, proven patterns applied to slightly different strategies. A quick read of the table of contents of a book like Cal Henderson’s Building Scalable Web Sites highlights many of these patterns:

– Secure distributed execution/processing

– Centralized control of critical functions

– Abstraction and Encapsulation

– Replication and “Sharding”

– Eventual Consistency and Coherency

– Simplification and De-statefulness

These principles are often intermingled. How do I separate the critical functions from the complexity of everything else? How do I develop a service I can run anywhere while ensuring performance and coherency?
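
As a small, hypothetical illustration of the replication/“sharding” pattern from the list above, here is hash-based shard routing (the shard hosts are placeholders):

    # Minimal sketch of hash-based sharding: route each key to one of N stores.
    # The shard hostnames are placeholders.
    import hashlib

    SHARDS = ["db0.example.com", "db1.example.com",
              "db2.example.com", "db3.example.com"]

    def shard_for(key):
        # hash the key so the same key always lands on the same shard
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    print(shard_for("user:42"))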

The definition of data centers and DC infrastructure has also changed. A decade or more ago you might have called this “data center architecture,” but now it’s about architectures for service delivery, regardless of where those services exist.

Initial review of Felt F1 SL Team

I finally got a chance to do a shakedown cruise on the new 2010 Felt F1 SL Team bike from the Argyle Club at Team Garmin-Transitions! Stiff in all the right places and an amazing ride. The 13-ish lb. bike feels like an extension of your body: easy to throw around, yet easy to control and drive toward wherever you need to go, as fast as you want to get there. I’ve ridden carbon before. What’s remarkable for me is the instantaneous steering response — straight and fast like an arrow, without feeling like you’re cramped in the frame.

My first set of carbon wheels (the “stock” Mavic Cosmic Carbone SLRs) surprised me first with their sound (I don’t notice it now, after ride three) — they don’t sound like metal alloy wheels, for sure! Second surprise: they added 2–3 degrees or more to cornering. Very stiff and they love to roll — they put wind resistance on notice for sure! The bike has the Garmin-Transitions and sponsor graphics, but it’s not overdone. The white seat and bar tape add that extra “pro” touch!

Some Thoughts on Cloud Adoption Methodology

Cloud data centers focus on two main areas: time to market and cost management. Technically, the cloud provides an abstract platform for deploying services and APIs for manipulating those services. The enabler is automation and the ability to systemically optimize the platforms and services. How do I get there? Granted, the focus here is moving forward with a transformational platform, but it’s useful to look backwards to address other challenges beyond technology.

1) Assess and understand what you have — at least to the point of understanding your current operational and platform challenges. What are the business drivers causing stress in the system? What are the other challenges? Focus especially on process. What will exist, or need to be changed, after the roadmap plays out?

2) Standardize — what do I need to offer? What is the minimum number of options that provides 80% coverage of my IT program? The other 20% is custom, doesn’t fit here, etc. The focus should be on developing a short menu of options: two to four core standards, each with two or three variations (see the catalog sketch after this list).

3) Develop the platform roadmap — how do IT services utilize the standards developed in step 2? How do I consolidate onto this common platform (or set of platforms)? How do users consume these services? How do I start to socialize the platform and work with the “friendlies” to develop use cases and best practices?

4) Build the service delivery platform — patterns, key configurations, etc. How does the network define service relationships? How do I provide key topological patterns for the platforms I’ve defined? What changes do I need to make within network and storage services to enable this new platform? How do services work in a “reduced state” environment? Work with key groups to map services to the new environment.

5) Develop and iteratively create the new platform — build it, use it, iterate and refine. Versioning is the key. Define the operational lifecycle beyond “ops” to include development and modular data center buildout. Help the “friendlies” onto the new platform. Measure — enable capacity management.
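
To make step 2 concrete, here is a toy sketch of such a catalog; the service names and sizes are invented for illustration:

    # Hypothetical "short menu": a few core standards, each with a few variations.
    # Names and sizes are invented for illustration.
    CATALOG = {
        "web": {"small": {"cpus": 2, "ram_gb": 4},
                "large": {"cpus": 8, "ram_gb": 16}},
        "app": {"small": {"cpus": 4, "ram_gb": 8},
                "large": {"cpus": 16, "ram_gb": 32}},
        "db":  {"large": {"cpus": 16, "ram_gb": 64},
                "xl":    {"cpus": 32, "ram_gb": 128}},
    }

    def offerings():
        # the whole menu: a handful of combinations meant to cover ~80% of requests
        return [(service, size) for service, sizes in CATALOG.items()
                for size in sizes]

    print(offerings())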

Cloud Deployment Best Practices

We just released our final “Sun” paper on Cloud Deployment and Packaging Concepts and Best Practices.  Have a look!

I pick on image-based deployments a bit. My primary concern is twofold:

– what you deploy is not always what you manage — the closer you can get the alignment, the better

– don’t forget the model — whether programmatic or hand-tooled (a rough sketch follows this list).
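
As a tiny, hypothetical sketch of that second point (not DSC’s actual format, just one way to keep a programmatic model alongside the deployment and check it against reality):

    # Hypothetical deployment model kept next to the image, so what you deploy
    # stays aligned with what you manage. All field names are invented.
    model = {
        "service": "billing-api",
        "version": "1.4.2",
        "packages": ["jdk", "billing-api-1.4.2"],
        "config": {"port": 8080, "db_host": "db0.example.com"},
    }

    def drift(model, observed):
        # compare the declared model with what is actually running
        return {key: (model[key], observed.get(key))
                for key in model if observed.get(key) != model[key]}

    observed = {"service": "billing-api", "version": "1.4.1",
                "packages": ["jdk"], "config": {"port": 8080}}
    print(drift(model, observed))  # surfaces the version, package, and config mismatches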

There are some great “on-demand” technologies out there, like DSC (dsc.kenai.com for now), that illustrate alternative application packaging models. I invite you to have a look. There are benefits in scale, manageability, fail-in-place architecture, etc.

Therapy

So this is a little off-topic for me, but I wanted to express some thoughts somewhere. Sun has indeed been a long and strange trip. I look forward to the world at Oracle and where that takes me.

I remember making a trip to the Bay Area in 93/94 — my first. I was supposed to be visiting HP for a training class on HP-UX. I went, but I had to make sure that I drove to Mountain View to visit the company whose Unix and systems I had really developed an appreciation for — that was Sun.

Since then it’s been a part of my life, now approaching almost two decades. I wonder what kids out of college do today when they go and visit some new area — Silicon Valley or one of the other valleys that have sprung up. I suspect that “tech” has lost its allure and draw a bit. But for me, it’s something to always remember: that feeling of respect and awe. It will continue — differently — but continue it will.

Jason Carolan, proud to have been a Sun Distinguished Engineer

Architectural Risk and Cloud Application Optimization

We just published a new paper that explores the area of risk and cloud application optimization. Does it make sense to refactor that existing application? Should I just make it run in the cloud, or optimize it for the cloud? What is cloud computing application utopia?

You can find it here…

https://www.sun.com/offers/details/cloud_refactoring.xml 

Table of Contents…


– Benefits of cloud computing
– Risks of cloud computing
– Reducing risk
– Definitions
  – Compatible with the cloud
  – Runs in the cloud
  – Optimized for the cloud
– Patterns
– Methodology
  – Process overview
– Example: two-tier Web service
  – Initial assessment
  – Optimize through refactoring
  – Optimize further
– Example: enterprise database cluster
  – Initial assessment
  – Optimize through refactoring further

Posted Sun’s Dynamic Infrastructure Attributes for the Cloud

See http://bit.ly/1MJoDW