Regulating Scale and Velocity — Wall Street vs Clouds Part 1

Looking at the “sell-off” and market hiccup that occurred last Thursday suggested an analogy to cloud computing. Trading is now a distributed process, something we were reminded of last week. Trades occurred, perhaps erroneously, perhaps not, spanning various exchanges under “rules” apparently set by those who know best but who seemingly find enforcement a struggle (as it often is in distributed systems). Even though trading in some specific securities stopped on some exchanges, it was allowed to continue on others, and the “view” we have into this is a single asking price, apparently an accumulated or averaged set of values.

We learned this single view was not reality, nor was it tied to a single security; it affected many securities in ways we are still trying to understand.

IT also looks for a similar singular view. Few actually achieve it. Even today, most workloads still run inside fairly well-known, controlled environments (e.g. data centers) that you own or control. Or perhaps the workloads have ventured out into the “cloud,” with risk spread across providers and on-premise data centers. Or maybe they run entirely in the off-premise cloud, for those who judge the risk (or the type of workload, data, etc.) acceptable.

But as the above highlights, changes are afoot. Increasingly, workloads are deployed across on-premise and off-premise clouds. Scale is increasing. Scale? Just as there are billions of stock trades, there are now billions and billions of objects marshaled to provide a “view” of a service, and to deliver that service to end users.

I probably pick on both IT vendors and IT users equally. I believe all sides can do a better job in building and implementing technologies to help with scale, velocity, and the distributed nature that is network computing.

We need to increase investment in three areas:

1) understanding how to provide IT policy management, definition, and distributed regulation

2) improved service definitions as an extension of workloads/payloads (this could be a subset of #1, but there is a “state change” when you go from modeled to deployed, e.g. instance immutability, and this is a critical point to understand)

3) creating more reflective architectures: connected autonomous systems that are regulated by #1 and run #2, with concrete, well-defined interfaces and policy-enforcement controls (and where events can be triggered both inside and outside the system)
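To make the third point concrete, here is a minimal sketch of one way a reflective architecture could combine these ideas: a policy defined once (centrally), enforced independently at each autonomous node, with events observable from outside the system. All names here (`Policy`, `Node`, the throttle rule) are illustrative assumptions, not references to any particular product.

```python
from dataclasses import dataclass, field

# Hypothetical policy: a named rule with a limit, defined centrally (area #1).
@dataclass
class Policy:
    name: str
    max_requests: int  # a volume ceiling each node enforces locally

# Hypothetical autonomous node (area #3): enforces the policy at its own
# boundary and emits events that observers outside the node can subscribe to.
@dataclass
class Node:
    node_id: str
    policy: Policy
    handled: int = 0
    listeners: list = field(default_factory=list)

    def on_event(self, fn):
        # Register an external observer for this node's events.
        self.listeners.append(fn)

    def handle(self, request: str) -> bool:
        # Local enforcement of the centrally defined policy.
        if self.handled >= self.policy.max_requests:
            self._emit(f"{self.node_id}: rejected {request} ({self.policy.name})")
            return False
        self.handled += 1
        self._emit(f"{self.node_id}: accepted {request}")
        return True

    def _emit(self, event: str):
        for fn in self.listeners:
            fn(event)

# One policy definition, enforced independently by each node.
policy = Policy(name="throttle", max_requests=2)
node = Node("exchange-a", policy)
events = []
node.on_event(events.append)

results = [node.handle(f"trade-{i}") for i in range(3)]
print(results)  # the third request is rejected by the local limit
```

The point of the sketch is the separation of concerns: the rule lives in one place, enforcement happens at every node's edge, and events make that enforcement visible from outside, so you are not left reconstructing reality from a single averaged “view.”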

Dealing with these issues is not easy, especially in multi-vendor environments. It requires a strong partnership between vendor and user/customer.

More on this in the coming week, as well as my take on how to actually build solutions that incorporate these areas.

