The title of the presentation may mean different things to different people. For some it may be about collaboration, for others communication, and perhaps for others the future of the enterprise. I hope that, irrespective of what you’re expecting, you’re still going to find some value here, because I’ll be covering a little of all the above and hopefully challenging some of the commonly held views in the industry.

What I am going to do is describe how we’re moving Kiwibank forward using our technology platform to enable social computing to address increasing complexity; how social computing provides a platform for information sharing that fundamentally differs from what I call old-school information management; and how the future of operations management is going to be based on model management.
Now I am steering away a little from the Enterprise 2 title because it’s a hot topic and it’s just too hot. There’s a lot of hype and a lot of hyperbole and you can easily read more by doing an internet search. Instead I'm going to concentrate on addressing complexity.
What's important to me is the fundamental problem of enabling people to interact effectively. It's not easy. As the number of people increases, the challenge increases dramatically. I've got the executive team from Kiwibank here and I've only added in a fraction of the total number of connecting lines. I ran out of time and energy to put in all the connections. As people come in, the number of interactions increases substantially, and this is why corporates spend far too much time talking about what they might do rather than actually doing.
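To put a number on that picture of the executive team: the count of possible one-to-one connections grows quadratically with headcount. This is just an illustrative sketch of the arithmetic, not anything from our environment:

```python
# Possible one-to-one communication channels between n people: n*(n-1)/2.
# The quadratic growth is why every extra person adds more than one new
# conversation to manage.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 50, 500):
    print(f"{n:>3} people -> {channels(n):>6} possible connections")
# 5 -> 10, 10 -> 45, 50 -> 1225, 500 -> 124750
```

Double the team and you roughly quadruple the number of conversations that could be going on.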
You see evidence for this everywhere. There’s a blog run by a company called Cybaea in the UK. They’ve been gradually refining data on employee productivity over the last few years, and this came out recently for the FTSE-100 – it just confirms again that the bigger you get, the less productive you are. Note that these are logarithmic scales: for a factor of 10 increase in size you end up with about a quarter of the productivity. You can go to Cybaea’s blog and view previous analyses run over S&P data as well – it’s a similar story.
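For what it’s worth, that “10x the size, a quarter of the productivity” observation implies a straight line on log-log axes with a slope of about -0.6. This is my own back-of-envelope reading of the shape of the claim, not Cybaea’s published fit:

```python
import math

# If a 10x increase in headcount leaves per-employee productivity at 1/4,
# the log-log slope is log10(1/4) / log10(10) ~= -0.602.
slope = math.log10(0.25)

def relative_productivity(size_ratio: float) -> float:
    """Per-employee productivity relative to baseline, for a given growth factor."""
    return size_ratio ** slope

print(round(relative_productivity(10), 4))    # 0.25 by construction
print(round(relative_productivity(100), 4))   # 0.0625: two decades of growth
```

In other words, under this rough power law a company 100 times bigger gets about a sixteenth of the per-employee output.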
What’s it like at Kiwibank? Well, here’s a graph of infrastructure and staff growth. Yes, we’ve had rapid internal growth to meet our rapidly increasing customer base, and yes, it means that the effort of prioritising and deciding what we do has become much, much greater.
And here is just a small part of our overnight processing. I’ve drawn stick figures on it because, as people, we’re standing around the outside operating, managing and, dare I say it, changing what’s there. The point of social computing platforms is that they overcome complexity.
Social computing provides us with tools to improve communication, collaboration and delivery. This is an area where there really has been an exponential increase in capability over a short period of time. From voice to letter to printing press to telephony to fax to email to the web to today’s social computing world – it has been a meteoric last-mile change.
And if you weren’t already convinced, you can throw in the products.
Now, a bit of an aside here, but here’s an opportunity to look back into recent history. Who recognises any of these people? Vannevar Bush, Douglas Engelbart and Sir Tim Berners-Lee – these, along with many others, created the computing infrastructure we rely on today.

Vannevar Bush, one of the key characters behind the Manhattan Project in the war and the first presidential science advisor, envisaged in his 1945 essay “As We May Think” that “Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them…”, and used the term Memex to describe what this thing would be.

Douglas Engelbart, famous for The Mother of All Demos, demonstrated a mouse, video conferencing and hypertext in a presentation in 1968. The whole thing is available to watch on YouTube – well worth it. We’re still struggling to achieve some of what he was doing in 1968, at a time when most people were working on card-driven batch processing systems.

And, of course, Sir Tim Berners-Lee – creator of the first web server and web browser.

So there’s a lesson buried in here. Ideas happen relatively quickly in the process of development, but there’s a lot else required to get something off the ground, and this is where the benefits of communication really come to the fore. Today’s world allows us to communicate ideas much faster, and you’d hope that the benefits of those ideas would come to market faster than they did for the web or the mouse. An example of this which I’m familiar with is community development, because it’s been a dramatic force in the last 10 years for getting good ideas out in the open. This isn’t just opensource – that’s part of it – but it’s more fundamentally about people talking/blogging/twittering about ideas on the net and getting feedback. I’ll make a plug here for Microsoft – the openness of the Microsoft research and development areas is fantastic.
If you want to know what the company’s doing – just subscribe to any of the MSDN blogs – you figure it all out pretty quickly.
And just for reference, here’s a screenshot of the CERN web site circa 1993, displayed on a NeXTSTEP workstation running the WorldWideWeb browser. At this stage the CERN web site was a year old. It’s not dissimilar to a lot of intranet experiments that have run through companies since about 1996.

Now here’s another important point. The protocols upon which the Internet operates were built to meet infrastructure restrictions that do not exist today – specifically very low bandwidth, expensive and limited storage, and limited computing power – and they were not designed with security as a requirement. If we were to design the Internet today, the underlying protocols would be very different, and this is having an enormous impact on us now, especially in ensuring a safe, secure experience online.

But enough of that – let’s move on to the Kiwibank experience.
So let’s talk about Kiwibank. You’re probably already aware that Kiwibank has a great Internet presence, but what you might not be aware of is that we’ve recently been putting a lot of effort into our internal collaboration environment.

Some more history: we’ve had a basic intranet, much like that original CERN site, since shortly after we launched. From memory it was developed by our call center manager using Microsoft Word and Save As HTML! It was then gradually extended as a secondary priority over the years by various people, including more bright young things from the call center and IT, but without any management, and as a result it was pretty rough round the edges. Not only that, but we were effectively developing multiple intranet environments. Within IT we’d trialled SharePoint 2007 right from the technical preview in 2006. One of our developers installed it, and the rest of us in the development area were so impressed we started using it straight away. It wasn’t that it was perfect at any one role, but that it was pretty good at a lot, and it was easy to integrate into a .NET environment like the one we have. So it gradually extended out over the whole of IT and we started using it to host business applications, but we still had the old intranet as the point of entry to the new functionality.

So we all knew we wanted to revamp it, but without a clear commercial business case it did take a while to progress. At the end of 2009 we finally went ahead with the rebuild.

So what was produced? What we have now is a broadly deployed social computing platform based upon SharePoint 2007. The management model is decentralised to reduce overhead, but with control where it’s needed; 85 content owners were trained across the company to take responsibility for their respective parts of the site.
Every page has the name of the content owner on it.

We’ve also supplemented the SharePoint platform with K2’s Black Pearl human workflow product, which augments the built-in SharePoint workflow management, and with BizTalk, used for system orchestration in some of our SharePoint business applications.

A key factor in the success of the development has been the very iterative approach we took: it ensured we got early successes, built up knowledge, refined our requirements quickly and maintained a high level of stakeholder communication.

How did we do? We use the out-of-the-box stats and a bimonthly survey to track progress:
• Average 9,600 page requests per day
• Directory (Green Pages) most popular – 900 requests per day
• Followed by News, Banking, Community, HR and the CEO’s blog
• 63% of Kiwibank use OurSpace 2-4 times a day or more
• 69% of Kiwibank staff think OurSpace is “Excellent” or “Very good”
• 54% of Kiwibank staff say OurSpace has already made their job “somewhat” or “much” easier to do
• 81% of Kiwibank staff “Agree” or “Strongly agree” that OurSpace adds value for our business and customers
Here’s an example of the functionality we host within the environment: a knowledge system that’s exposed out to the retail environment. We developed this within Sharepoint in 2008 and it’s now a part of the overall platform. It’s a critical function for the bank as it supports our retail environments which have to deal with a great diversity of products and systems across both NZ Post and Kiwibank.
Usage of AskMe has paralleled the wider SharePoint platform and continues to grow rapidly.
Here’s the entry to our moves/adds/changes site. This is now a K2-managed workflow running through SharePoint.
Here’s our banking pages. We publish a bank update every couple of days and they’re well read across the whole bank. You’ll notice we don’t advertise an RSS feed at the moment. Many of us within IT use RSS, but we haven’t come to a conclusion on how it should be used across the wider bank – something for the future.
And part of the directory – my manager, Ron. Notice the presence information – we’ve been trialling the Microsoft Communicator client through IT. Initial findings have been that the presence information is very useful, with instant messaging slightly less useful for us – probably because most corporate staff are within a short physical distance of each other. We’re going to extend the IM platform, but for reasons related to our Cisco telephony environment we’re likely going to use the latest Cisco client, which now looks to integrate very tightly with the Microsoft platform.
Here’s our IT systems knowledge base. This in combination with our IT Wiki and the Microsoft System Center environment provides very rich information about the systems and services we offer.
Now, we didn’t have enough expertise in house to complete the whole project, so this was a combined effort with external resources. In comparison to our original intranet, this time round we put special emphasis on usability:
• Optimal Usability – user research, usability and information architecture. Over 400 people were involved in surveys, card sorting, user testing and interviews. Spend time getting this right – it flows on to navigation, process, content and so on.
• Springload – visual design. They already work with Kiwibank on the public site, so reusing design elements saved time and money and kept things familiar for staff.
• KnowledgeCue – provided detailed SharePoint development and administration expertise.
The vendors were on board from the start, working and meeting together.
Now, company boundaries are softening in this new world. Thanks to all this social computing, the communication paths into a company are also opening up, and I don’t think most companies really know how to deal with this. In our case we have a social media working group, with reps from across the company but especially marcoms and, of course, IT, that meets weekly. We have a high-level understanding documented with our executive team that there’s no single owner, but instead a distributed stakeholding model involving many parties. For the moment we’re working on building up internal knowledge amongst employees of the consequences of public communication.
Here’s an example: Kiwibank’s Twitter page, http://twitter.com/kiwibanknz – and notice that you see our staff as real people. They’re real people that I work with, and this is what the new Enterprise 2 technologies do. They emphasize the individual over the big, long, disconnected product development process. It’s a level of transparency that creates opportunities, but not without some danger.
Here’s another example: our personal finance manager, heaps! What I like about this is the way we’re going about developing it – it’s done in collaboration with an external partner, Social Capital, and we’ve gone for a very customer-focussed approach. Here’s a screenshot of the heaps! blog, where you can follow progress on upcoming development and provide feedback for future change. This is what modern, agile, customer-focussed development is all about.
You have to be quick with this game though – here’s http://twitter.com/kiwibank – voxy.co.nz have already taken this one as a generic Kiwi banking site. We were a bit slow there… obviously the other banks were as well. Ownership takes on a new meaning on the public Internet. Whether it’s Twitter or Facebook or any other site, just what does it mean to say you have a presence? It’s too easy for others to grab your identity, and too costly at the moment to stop them.
Now, this focus on people has an interesting outcome – you can use the social collaboration information to correlate individuals’ contributions to projects or topics. Tools like SharePoint and Visual Studio Team System – which integrates with SharePoint – and now the upcoming Microsoft Project Server provide you with a way to query that data, and you can use it to help manage the IP of the organisation. You don’t want too many projects with only one or two people working on them, and conversely you don’t want individuals who are the sole, or almost sole, contributors to too many projects. It’s a way of keeping track of your key man risk.
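To make the idea concrete, here’s a minimal sketch of the kind of query you’d run. The contribution records here are made up for illustration; in practice they would be pulled from SharePoint, Team System or Project Server activity data:

```python
from collections import defaultdict

# Hypothetical (project, person) contribution records.
contributions = [
    ("payments", "alice"), ("payments", "bob"),
    ("fraud", "carol"),
    ("intranet", "alice"), ("intranet", "bob"), ("intranet", "dave"),
    ("batch", "carol"),
]

people_by_project = defaultdict(set)
projects_by_person = defaultdict(set)
for project, person in contributions:
    people_by_project[project].add(person)
    projects_by_person[person].add(project)

# Key man risk, view 1: projects that depend on a single contributor.
single_contributor = sorted(p for p, ppl in people_by_project.items() if len(ppl) == 1)

# View 2: people who are the sole contributor on several projects.
sole_owner_counts = {
    person: sum(1 for proj in projects if people_by_project[proj] == {person})
    for person, projects in projects_by_person.items()
}

print(single_contributor)          # ['batch', 'fraud']
print(sole_owner_counts["carol"])  # 2 - carol alone holds two projects
```

Both views fall straight out of the same raw contribution data once you have it in a queryable form.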
Here’s a further example of this from the Internet that you might not be familiar with: stackoverflow. Very, very simple to use. Very, very popular amongst the developer community. You ask a question and get answers back. The more you participate, the higher your status. Answers get rated by peers so individuals rise up and fall down depending upon the content of their responses. It’s an excellent example of a meritocracy, a very transparent, simple meritocracy. Maybe this is how we’ll be working within our company environments in the future?
One thing you get from these models is a clear understanding of what’s going on. Let’s check what’s hot. C#’s hot – nearly twice as hot as Java. iPhone’s hot.
So what do we do in this uphill battle to get more productive? It doesn’t matter how you approach it: to do this you need information, and you need intelligent analysis supporting decision making. Some might call it strategy, but in my opinion a culture of constant, iterative, smart decision making beats strategy any day.
And that gets us onto information in an Enterprise 2 environment. I think there are two anti-patterns of modern BI.

The Data Management Anti-Pattern – the norm: Data -> Managed store -> Improved access -> Presentation = Intelligence 0.

The No Programming Anti-Pattern – dumb down BI to the point it’s of no value.

What Enterprise 2 platforms do is make information available for people to consume at their desktops. Most BI efforts concentrate on a bottom-up approach to data management, obsessed with cleaning but with little idea of what is actually going to be valuable. They end up stifling analysts when they should be empowering them!

So the alternative is to make data available. Forget the cubing, forget the scrubbing – you don’t even need all the data – what you need is enough precision to make a call with an understanding of the uncertainty in the decision. And for each and every problem there’s likely going to be a different approach required to come to a solution. So giving power analysts tools that enable them to do real work becomes more important than drag-and-drop interfaces that provide an entry-level view of the data.

What should central BI teams do? They should make the data available from the source systems. The source systems should have the correct data and, if not, then it’s in the source systems that it should be fixed. And lastly, they should own the model that tells an analyst where data is and how it relates to something else.
The interesting thing here is that pretty much all these steps are hard. Enterprise BI doesn’t empower data collection – it’s reliant upon very structured, expensive extraction, transformation and staging processes. Model creation isn’t easy – relational models are not simple to build, nor really appropriate for giving context to aggregated values. Cubes are hierarchical and a sensible design choice, except that the design and creation process is very non-trivial and again relies upon a centralised BI team. Logic is not clearly defined – there’s an enormous diversity of practice across practitioners. If you were to ask me what logic means here, it’s really the definition of the KPIs – these display the outcome of the logic. And we won’t even go into the ability to execute…
So when I think Enterprise 2 and business intelligence supporting decision making for the future I find myself thinking that competition is going to drive us towards more analytic approaches that rely on more data transparency and the collective intelligence of the individuals that use that data in organisations.
Now, I mentioned code. Who here has heard of R? Well, the two guys in The NY Times article are Aucklanders; they’re locals! They started the R project off by writing the first implementation. The picture on the right is a recent article from Forbes, which also talks about R and a commercial derivation of R from the chap that created SAS. R is an open-source programming language designed especially to support stats and analysis work. It’s become popular around the world in the last few years – originally in the academic world, but now in commercial environments, especially finance. Yes, you have to code – but anything you do with R is going to be done by what you might call a power analyst, and it’s through taking this approach that you’re going to get ahead of your competitors. In our case we’ve used R at Kiwibank within IT for analysing data recorded from Internet Banking – I’d like to see it used further through the organisation, but there’s some work required there.
And from the Forbes article comes this quote. This is important stuff – Web 2.0 companies like Facebook and Twitter compete based upon the data embedded within their systems. Under the Enterprise 2.0 model the same thing matters. To get the most benefit out of them you need good statistical analysis.
Now, statistical analysis isn’t the only area of analytics with strong NZ involvement. Who’s heard of Weka? It turns out that the University of Waikato created the Weka project for data mining and machine learning, and it has become very popular globally. The self-professed world-leading open-source data mining package, RapidMiner, uses Weka for the underlying machine learning algorithms. I’m not sure if there are any local companies actually using this, but I suspect not, and it’s a shame to think that such great knowledge is available locally in the academic world but probably isn’t available in NZ companies.

It’s not just in the opensource world that you get great tools, by the way. There’s a lot of companies here running Microsoft’s SQL Server. SQL Server runs a very, very capable data mining engine inside it as part of SQL Server Analysis Services. They’ve also done a great job of exposing that functionality through a very easy-to-use data mining add-in for Excel, but it constantly amazes me how little is known about it when I mention it or show it to people. Microsoft actually provide a wealth of underused analytic tools that most of you will be partially, if not wholly, licensed for at the moment – another example is the optimisation framework called Solver Foundation.
So we’re onto the last part of the presentation: model-based management. As our infrastructures become more complex, we’re facing an increasing battle to monitor and manage them. The traditional approach is to put in a monitoring solution, something like Nagios or Zenoss. The thing about this is that while you collect lots of data and make it available for analysis, you have no context for it. This is analogous to the BI discussion: you need access to the source data and you need context for how it relates.
Here’s a Microsoft slide from a recent presentation by Bob Muglia, Microsoft President Server and Tools Division. This is the way the world looked recently. Models were owned by development teams and delivered to operations.
But it’s wrong! The truth lives in the real world; it lives in the data centers, in the actual deployed systems, not in the intentions written down at the beginning of projects. So what Microsoft have done is move the team that was producing modelling tools into the System Center team responsible for building the operational tools. What’s System Center? It’s an operations management suite from Microsoft that includes components for monitoring, modelling, alerting, deploying and so on.
So let me try to make this tangible for you. Here are two diagrams from our System Center Operations Manager environment. What it shows may not seem special at first sight, but actually it is – or at least I think so. What you see at the top left is a dashboard, and at the bottom right is a graphical model. The model is connected to an underlying hierarchy of classes called the System Definition. Against that model are health rules that show whether something is working correctly or not. The important thing about this is that the model connects the business view with the engineering view: when something goes wrong at an engineering level, the context is provided by the model hierarchy.

Here I have an instance of a scheduling function that isn’t operating correctly in the staging view, and a summary dashboard that shows a high-level warning indication for Staging (and Collections, which itself depends upon Staging). Interestingly, the dashboard also shows a server memory issue bubbling up to affect our origination and fraud processes – I know this because at the time I took the screenshots I clicked on them and opened up a health explorer to drill down into the individual health monitors and find out what was going wrong. There’s knowledge base information stored against these health indicators to let you know what’s going wrong and what you should do about it. There are also workflows to automatically correct, or maybe undertake specific actions to alert and recover.

I should mention that the diagrams are a small extension to System Center – you can do it with Visio 2010 and SharePoint 2010, or you can use an add-in product called Savision Live Maps. It’s important to understand, though, that the data and logic behind this is all part of System Center.
This is an example of using SCOM as a provider of detailed performance data from across the whole of the network. You could say this is analogous to an MIS data warehouse, but that’s only partially true. What it does is allow you to collect data in one easy-to-get-to place, but what is stored isn’t aggregated or transformed. It’s still the raw source data – but with a model that gives me information on how one bit of data connects with another. What it means for me is that I can easily pull information out and manipulate it to give it meaning.

Here I’m getting the time it takes to perform an action against our core banking system. Now, I haven’t had to specify the exact web servers from which to get this information – I just told SCOM I wanted the data to come from the class of servers that talk to our core banking environment, and it figured out how to get the data. Then I do a local aggregation to correlate performance with time of day and the arrival rate. The thing is that I’ve done this in code. Don’t ever be scared to do something in code if you want to get that step ahead of your competitors. (A quick note to say I’m not using R here – I actually like Microsoft’s new F# language. It’s especially good at analytics, but it also gives full access to all the great stuff on the .NET platform, which makes it useful in an environment like ours.)

In this case the image is a heat map, with time going from midnight to midnight across the top, and arrival rate going from 0 at the top to the maximum value we obtained at the bottom. What it’s telling me is that night-time performance is affected by overnight batch jobs when arrival rates get high. No big surprise, but it does quantify it for me, and it’s a clear pointer to the fact that at the moment our priority for performance tuning in the core banking environment should be the overnight batch jobs rather than the daytime real-time activity.
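The local aggregation step is simple enough to sketch. This is an illustration in Python rather than the F# we actually used, with invented sample data; the idea is just to bucket raw (hour, arrival rate, latency) samples into heat map cells and average each cell:

```python
from collections import defaultdict

def heatmap(samples, rate_bands=4):
    """samples: (hour_of_day, arrival_rate, latency_ms) tuples.
    Returns {(hour, rate_band): mean_latency_ms} - one value per cell."""
    max_rate = max(rate for _, rate, _ in samples)
    cells = defaultdict(list)
    for hour, rate, latency in samples:
        # Band 0 is the quietest quarter of the rate range, band 3 the busiest.
        band = min(int(rate / max_rate * rate_bands), rate_bands - 1)
        cells[(hour, band)].append(latency)
    return {cell: sum(vals) / len(vals) for cell, vals in cells.items()}

# Invented samples: slow responses overnight, fast ones mid-afternoon.
samples = [(2, 5, 900), (2, 5, 1100), (14, 80, 200), (14, 80, 240)]
grid = heatmap(samples)
print(grid[(2, 0)])   # 1000.0 ms: overnight cell, low arrival rate
print(grid[(14, 3)])  # 220.0 ms: afternoon cell, high arrival rate
```

Once the cells are averaged, rendering them as a coloured grid is straightforward, and the overnight-batch effect stands out immediately.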
CIO Conference July 2010
Products and Services
Visual design
User research, IA
Solution design and build
Company Boundaries Are Softening
Social Computing Focuses on the Individual
• Code – Team System
• Documents, Wiki, Blog – SharePoint
Systems of more importance have had more change over time. Correlate change to individuals, and account for shared knowledge, to identify where people have worked on many small projects and therefore present significant key man risk.
Meritocracy Through Transparency
722,189 questions as at 14th June 2010 => 84% answer rate
Getting more productive
Business Intelligence in Enterprise 2.0
De-centralise the Data!
“Data Management First”
+ Logic (=Math)
+ Ability to Execute
“I keep saying the sexy job in the next ten years will be statisticians.”
– Hal Varian, Google’s Chief Economist, The McKinsey Quarterly, January 2009
Enterprise Management 2.0
• Infrastructure (apps and kit) complexity
• Melding of desktop/server infrastructure
• Enterprise monitoring -> Enterprise Modelling
• Truth lies in the datacentre…
• Modelling should move from dev -> prod