Globalization Risks and Data Complexity Demand New Breed of Hybrid IT Management, Says Wikibon’s Burris
Transcript of a discussion on how international cloud distribution models and global business
ecosystems factor into hybrid cloud management challenges and solutions.
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript.
Sponsor: Hewlett Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the
Analyst podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host.
Join us as we hear from leading IT industry analysts and consultants on how to best
make the hybrid IT journey to successful digital business transformation. Our next
interview examines how globalization and distributed business ecosystems factor into
hybrid cloud challenges and solutions.
We’ll explore how mounting complexity and the lack of multi-cloud services management
maturity must be solved in order for businesses to grow and thrive as global digital enterprises.
Here to report on how international companies must factor
localization, data sovereignty and other regional factors into any
transition to sustainable hybrid IT is Peter Burris, Head of
Research at Wikibon, and he’s based in Palo Alto, California.
Peter Burris: Hi, Dana. Thank you very much for having me.
Gardner: Peter, companies doing business just in North
America -- or doing their software development here, or that are
startups here -- can have an American-centric view of things.
They may lack an appreciation for the global aspects of cloud
computing models. We want to explore that today. How much
more complex is doing cloud -- especially hybrid cloud -- when you’re straddling global regions?
Global cloud challenges
Burris: There are advantages and disadvantages to thinking cloud-first when you are
thinking globalization first. The biggest advantage is that you are able to work in
locations that don’t currently have the broad-based infrastructure that’s typically
associated with a lot of traditional computing modes and models.
The downside of it is, at the end of the day, that the value in any computing system is not
so much in the hardware per se; it’s in the data that’s the basis of how the system works.
And because of the realities of working with data in a distributed way, globalization that
is intended to more fully enfranchise data wherever it might be introduces a range of
architectural, implementation, and legal complexities that can’t be discounted.
So, cloud and globalization can go together -- but it dramatically increases the need for
smart and forward-thinking approaches to imagining, and then ultimately realizing, how
those two go together, and what hybrid architecture is going to be required to make it work.
Gardner: If you need to then focus more on the data issues -- such as compliance,
regulation, and data sovereignty -- how is that different from taking an applications-
centric view of things?
Burris: Most companies have historically taken an infrastructure-centric approach to
things. They start by saying, “Where do I have infrastructure, where do I have servers
and storage, do I have the capacity for this group of resources, and can I bring the
applications up here?” And if the answer is yes, then you try to ultimately economize on
those assets and build the application there.
That runs into problems when we start thinking about privacy, and about ensuring that local
markets and local approaches to intellectual property management can be respected.
But the issue is more than just things like the General Data Protection Regulation
(GDPR) in Europe, which is a series of regulations in the European Union (EU) that are
intended to protect consumers from what the EU would regard as inappropriate
leveraging and derivative use of their data.
Ultimately, the globe is a big place. It’s 12,000 miles or so
from point A to point B, and physics still matters. So, the
first thing we have to worry about when we think about
globalization is the cost of latency and the cost of
bandwidth of moving data -- either small or very large --
across different regions. It can be extremely expensive
and sometimes impossible to even conceive of a global
cloud strategy where the service is being consumed a few
thousand miles away from where the data resides, if there
is any dependency on time and how that works.
So, the issues of privacy, the issues of local control of
data are also very important, but the first and most important consideration for every
business needs to be: Can I actually run the application where I want to, given the
realities of latency? And number two: Can I run the application where I want to given the
realities of bandwidth? This issue can completely overwhelm all other costs for data-rich,
data-intensive applications over distance.
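Burris’ point that “physics still matters” can be made concrete with a quick back-of-the-envelope calculation. The sketch below is an illustration, not anything from Wikibon’s models: it computes only the theoretical lower bound on round-trip latency through optical fiber, while real networks add routing, queuing, and processing delays on top.

```python
# Back-of-the-envelope check on the "physics still matters" point:
# a lower bound on round-trip latency through optical fiber, ignoring
# routing, queuing, and processing delays (real-world figures run higher).

SPEED_OF_LIGHT_FIBER_KM_S = 200_000  # ~2/3 of c in vacuum, typical for fiber
KM_PER_MILE = 1.609344

def min_round_trip_ms(distance_miles: float) -> float:
    """Theoretical minimum round-trip time in milliseconds."""
    distance_km = distance_miles * KM_PER_MILE
    return 2 * distance_km / SPEED_OF_LIGHT_FIBER_KM_S * 1000

# The "12,000 miles or so from point A to point B" case from the discussion:
print(f"{min_round_trip_ms(12_000):.0f} ms")  # ~193 ms before any real network overhead
```

Roughly a fifth of a second per round trip is already fatal for many automated control loops, before any real-world overhead is counted.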
Gardner: As you are factoring your architecture, you need to take these local
considerations into account, particularly when you are factoring costs. If you have to do
some heavy lifting to make your bandwidth capable, it might be better to have a local
closet-sized data center -- they are small and efficient these days -- and stick with a
private cloud or on-premises approach. At the least, you should establish the economic
basis for comparison, with all these other variables you brought up.
Burris: That’s correct. In fact, we call them “edge centers.” For example, if the
application features any Internet of Things (IoT) capabilities, then there will likely be
latency considerations in play, and the cost of doing a round-trip
message over a few thousand miles can be pretty significant when we consider the total
cost of how fast computing can be done these days.
The first consideration is: What are the impacts of latency for an application workload like
IoT, and is it intended to drive more automation into the system? Imagine, if you will,
the businessperson who says, “I would like to enter into a new market or expand my
presence in the market in a cost-effective way. And to do that, I want to have the system
be more fully automated as it serves that particular market or that particular group of
customers. And perhaps it’s something that looks more process manufacturing-oriented
or something along those lines that has IoT capabilities.”
The goal, therefore, is to bring in the technology in a way
that does not explode the administration, management,
and labor cost associated with the implementation.
The only way you are going to do that is if you
introduce a fair amount of automation and if, in fact, that
automation is capable of operating within the time
constraints required by those automated moments, as
we call them.
If the round-trip cost of moving the data from a remote global location back to
somewhere in North America -- independent of whether it’s legal or not -- comes at a
cost that exceeds the automation moment, then you just flat out can’t do it. Now, that is
the most obvious and stringent consideration.
On top of that, these moments of automation necessitate significant amounts of data
being generated and captured. We have done model studies where, for example, the
cost of moving data out of a small wind farm can be 10 times as expensive as processing it locally. It can cost
hundreds of thousands of dollars a year to do relatively simple and straightforward types
of data analysis on the performance of that wind farm.
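The wind-farm economics can be illustrated with a rough sketch. Every figure below -- fleet size, sensor counts, sample rates, and the per-gigabyte egress price -- is an assumption invented for this example, not a number from Wikibon’s study; the point is only how quickly continuous telemetry multiplies into a material annual bill.

```python
# Illustrative sketch (not Wikibon's actual model): why hauling raw telemetry
# out of a remote site adds up. All figures below are assumptions.

turbines = 50
sensors_per_turbine = 100
samples_per_second = 1000       # assumed high-rate vibration telemetry
bytes_per_sample = 4            # assumed compact encoding per reading
egress_cost_per_gb = 0.09       # a typical public-cloud egress list price, USD

seconds_per_year = 365 * 24 * 3600
bytes_per_year = (turbines * sensors_per_turbine * samples_per_second
                  * bytes_per_sample * seconds_per_year)
tb_per_year = bytes_per_year / 1e12
annual_egress_usd = bytes_per_year / 1e9 * egress_cost_per_gb

print(f"{tb_per_year:.1f} TB/year -> ${annual_egress_usd:,.0f}/year in egress alone")
```

Even these modest assumptions yield hundreds of terabytes a year; provisioning remote links to carry that traffic typically costs far more than the list egress price alone.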
Process locally, act globally
It’s a lot better to have a local presence that can handle local processing requirements
against models that are operating against locally derived data or locally generated data,
and let that work be automated, with only periodic visibility into how the overall system is
working. And that’s where a lot of this kind of on-premises hybrid cloud thinking is coming from.
It gets more complex than in a relatively simple environment like a wind farm, but
nonetheless, the amount of processing power that’s necessary to run some of those
kinds of models can get pretty significant. We are going to see a lot more of this kind of
analytic work be pushed directly down to the devices themselves. So, the Sense, Infer,
and Act loop will occur very, very closely in some of those devices. We will try to keep as
much of that data as we can local.
But there are always going to be circumstances when we have to generate visibility
across devices, we have to do local training of the data, we have to test the data or the
models that we are developing locally, and all those things start to argue for sometimes
much larger classes of systems.
Gardner: It’s a fascinating subject as to what to push down the edge given that the
storage cost and processing costs are down and footprint is down and what to then use
the public cloud environment or Infrastructure-as-a-Service (IaaS) environment for.
But before we go into any further, Peter, tell us about yourself, and your organization,
Burris: Wikibon is a research firm that’s affiliated with something known as theCUBE.
theCUBE conducts about 5,000 interviews per year with thought leaders at various
locations, often on-site at large conferences.
I came to Wikibon from Forrester Research, and before that I had been a part of META
Group, which was purchased by Gartner. I have a longstanding history in this business. I
have also worked with IT organizations, and also worked inside technology marketing in
a couple of different places. So, I have been around.
Wikibon's objective is to help mid-sized to large enterprises traverse the challenges of
digital transformation. Our opinion is that digital transformation actually does mean
something. It's not just a set of bromides about multichannel or omnichannel or being
“uberized,” or anything along those lines.
The difference between a business and a
digital business is the degree to which data is
used as an asset. In a digital business, data
absolutely is used as a differentiating asset for
creating and keeping customers.
We look at the challenges of what does it mean
to use data differently, how to capture it differently, which is a lot of what IoT is about. We
look at how to turn it into business value, which is a lot of what big data and these
advanced analytics like artificial intelligence (AI), machine learning and deep learning
are all about. And then finally, how to create the next generation of applications that
actually act on behalf of the brand with a fair degree of autonomy -- what we call
“systems of agency.” And then ultimately, how cloud and historical
infrastructure are going to come together and be optimized to support all of those new classes of applications.
We are looking at digital business transformation as a relatively holistic thing that
includes IT leadership, business leadership, and, crucially, new classes of partnerships
to ensure that the services that are required are appropriately contracted for and can be
sustained as digital business becomes an increasing feature of any company’s value proposition.
That's what we do.
Global risk and reward
Gardner: We have talked about the tension between public and private cloud in a
global environment through speeds and feeds, and technology. I would like to elevate it
to the issues of culture, politics and perception. Because in recent years, with offshoring
and looking at intellectual property concerns in other countries, the fact is that all the
major hyperscale cloud providers are US-based corporations. There is a wide
ecosystem of other second-tier providers, but the top tier is certainly US-based.
Is that something that should concern people when it comes to risk to companies that
are based outside of the US? What’s the level of risk when it comes to putting all your
eggs in the basket of a company that's US-based?
Burris: There are two perspectives on that, but let me just add one more check on this.
Alibaba clearly is one of the top-tier, and they are not based in the US and that may be
one of the advantages that they have. So, I think we are starting to see some new
hyperscalers emerge, and we will see whether or not one will emerge in Europe.
I had gotten into a significant argument with a group of people not too long ago on this,
and I tend to think that the political environment almost guarantees that we will get some
kind of scale in Europe for a major cloud provider.
If you are a US company, are you concerned about how intellectual property is treated
elsewhere? Similarly, if you are a non-US company, are you concerned that the US
companies are typically operating under US law, which increasingly is demanding that
some of these hyperscale firms be relatively liberal, shall we say, in how they share their
data with the government? This is going to be one of the key issues that influence
choices of technology over the course of the next few years.
Cross-border compute concerns
We think there are three fundamental concerns that every firm is going to have to worry about.
I mentioned one, the physics of cloud computing. That includes latency and bandwidth.
One computer science professor told me years ago, “Latency is the domain of God, and
bandwidth is the domain of man.” We may see bandwidth costs come down over the
next few years, but let’s just lump those two things together because they are physical realities.
The second one, as we talked about, is the idea of privacy and its legal implications.
The third one is intellectual property control and concerns, and this is going to be an
area that faces enormous change over the course of the next few years. It’s in
conjunction with legal questions on contracting and business practices.
From our perspective, a US firm that wants to operate in a location that features a more
relaxed regime for intellectual property absolutely needs to be concerned. And the
reason why they need to be concerned is data is unlike any other asset that businesses
work with. Virtually every asset follows the laws of scarcity.
Money, you can put it here or you can put it there.
Time and people, you can put here or you can put there.
That machine can be dedicated to this kind of work or
that kind of work.
Scarcity is a dominant feature of how we think about
generating returns on assets. Data is weird, though,
because data can be copied, data can be shared.
Indeed, the value of data appreciates as we use it
more successfully, as we use it more completely, and as
we integrate it and share it across multiple applications.
And that is where the concern is, because if I have data in one location, two things could
possibly happen. One is that it gets copied and stolen, and there are a lot of implications to
that. And two, that there are rules and regulations in place that restrict how I can combine
that data with other sources of data. That means, for example, that my customer data in
Germany may not appreciate, or may not be able to generate the same types of returns,
as my customer data in the US.
Now, that sets aside any moral question of whether or not Germany or the US has better
privacy laws and protects the consumers better. But if you are basing investments on
how you can use data in the US, and presuming a similar type of approach in most other
places, you are absolutely right. On the one hand, you probably aren’t going to be able
to generate the total value of your data because of restrictions on its use; and number
two, you have to be very careful about concerns related to data leakage and the
appropriation of your data by unintended third parties.
Gardner: There is the concern about the appropriation of the data by governments,
including the United States with the PATRIOT Act. And there are ways in which
governments can access hyperscalers’ infrastructure, assets, and data under certain
circumstances. I suppose there’s a whole other topic there, but at least we should
recognize that there's some added risk when it comes to governments and their access
to this data.
Burris: It’s a double-edged sword that US companies may be worried about
hyperscalers elsewhere, but companies that aren't necessarily located in the US may be
concerned about using those hyperscalers because of the relationship between those
hyperscalers and the US government.
These concerns have been suppressed in the grand regime of decision-making in a lot of
businesses, but that doesn’t mean that it’s not a low-intensity concern that could bubble up; and
perhaps, it’s one of the reasons why Alibaba is growing so fast right now.
All hyperscalers are going to have to be able to
demonstrate that they can, in fact, protect their clients,
their customers’ data, utilizing the regime that is in
place wherever the business is being operated. [The
rationale] for basing your business in these types of
services is really immature. We have made enormous
progress, but there’s a long way yet to go here, and
that’s something that businesses must factor in as they
make decisions about how they want to incorporate cloud services.
Gardner: It’s difficult enough given the variables and
complexity of deciding a hybrid cloud strategy when you’re only factoring the technical issues.
But, of course, now there are legal issues around data sovereignty, privacy, and intellectual
property concerns. It’s complex, and it’s something that an IT organization, on its own, typically
cannot juggle. This is something that cuts across all the different parts of a global enterprise --
their legal, marketing, security, risk avoidance and governance units -- right up to the board of
directors. It’s not just a willy-nilly decision to get out a credit card and start doing cloud
computing on any sustainable basis.
Burris: Well, you’re right, and too frequently it is a willy-nilly decision where a developer or a
business person says, “Oh, no sweat, I am just going to grab some resources and start building
something in the cloud.”
I can remember back in the mid-1990s when I would go into large media companies to meet
with IT people to talk about the web, and what it would mean technically to build applications on
the web. I would encounter 30 people, and five of them would be in IT and 25 of them would be
in legal. They were very concerned about what it meant to put intellectual property in a digital
format up on the web, because of how it could be misappropriated or how it could lose value.
So, that class of concern -- or that type of concern -- is minuscule relative to the broader
questions of cloud computing, of the grabbing of your data and holding it hostage, for example.
There are a lot of considerations that are not within the traditional purview of IT, but CIOs need
to start thinking about them on their own and in conjunction with their peers within the business.
Gardner: We’ve certainly underlined a lot of the challenges. What about solutions? What can
organizations do to prevent going too far down an alley that’s dark and misunderstood, and
therefore have a difficult time adjusting?
How do we better rationalize for cloud computing decisions? Do we need better management?
Do we need better visibility into what our organizations are doing or not doing? How do we
architect with foresight into the larger picture, the strategic situation? What do we need to start
thinking about in terms of the solutions side of some of these issues?
Cloud to business, not business to cloud
Burris: That’s a huge question, Dana. I can go on for the next six hours, but let’s start here.
The first thing we tell senior executives is, don’t think about bringing your business to the cloud
-- think about bringing the cloud to your business. That’s the most important thing. A lot of
companies start by saying, “Oh, I want to get rid of IT, I want to move my business to the cloud.”
It’s like many of the mistakes that were made in the 1990s regarding outsourcing. When I would
go back and do research on outsourcing, I discovered that a lot of the outsourcing was not
driven by business needs, but driven by executive compensation schemes, literally. So, where
executives were told that they would be paid on the basis of return on net assets, there was a
high likelihood that the business was going to go to outsourcers to get rid of the assets, so the
executives could pay themselves an enormous amount of money.
The same type of thinking pertains here -- the goal is not to get rid of IT assets, since those
assets, generally speaking, are becoming less important features of the overall value proposition of the business.
Think instead about how to bring the cloud to your
business, and to better manage your data assets,
and don’t automatically default to the notion that
you’re going to take your business to the cloud.
Every decision-maker needs to ask himself or
herself, “How can I get the cloud experience
wherever the data demands?” The cloud
experience, which is a very, very powerful
concept, ultimately means being able to get access to a very rich set of services associated with
automation. We need visible pricing and metering, self-sufficiency, and self-service. These are
all the experiences that we want out of cloud.
What we want, however, are those experiences wherever the data requires it, and that’s what’s
driving hybrid cloud. We call it “true private cloud,” and the idea is of having a technology stack
that provides a consistent cloud experience wherever the data has to run -- whether that’s
because of IoT or because of privacy issues or because of intellectual property concerns. True
private cloud is our concept for describing how the cloud experience is going to be enacted
where the data requires, so that you don’t just have to move the data to get to the cloud experience.
Weaving IT all together
The third thing to note here is that ultimately this is going to lead to the most complex
integration regime we’ve ever envisioned for IT. By that I mean, we are going to have
applications that span Software-as-a-Service (SaaS), public cloud, IaaS services, true private
cloud, legacy applications, and many other types of services that we haven’t even conceived of yet.
And understanding how to weave all of those different data sources, and all those different
service sources, into a coherent application framework that runs reliably and provides a
continuous, ongoing service to the business is essential. It must involve a degree of distribution
that completely breaks most models. We’re thinking about infrastructure, architecture, but also,
data management, system management, security management, and as I said earlier, all the way
out to even contractual management, and vendor management.
The arrangement of resources for the classes of applications that we are going to be building in
the future is going to require deep, deep, deep thinking.
That leads to the fourth thing, and that is defining the metric we’re going to use increasingly
from a cost standpoint. And it is time. As the costs of computing and bandwidth continue to drop
-- and they will continue to drop -- it means ultimately that the fundamental cost determinant will
be, How long does it take an application to complete? How long does it take this transaction to
complete? And that’s not so much a throughput question, as it is a question of, “I have all these
multiple sources that each on their own are contributing some degree of time to how this piece
of work finishes, and can I do that piece of work in less time if I bring some of the work, for
example, in-house, and run it close to the event?”
This relationship between increasing distribution of work, increasing distribution of data, and the
role that time is going to play when we think about the event that we need to manage is going to
become a significant architectural concern.
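The time-as-cost idea above can be sketched as a simple comparison: the same chatty piece of work, costed by completion time when it runs far from its data versus close to the event. The placement names and all figures here are hypothetical, chosen only to show how round trips dominate.

```python
# A hedged sketch of "time as the fundamental cost metric": the same piece
# of work under different placements. All figures are hypothetical.

def completion_time_ms(compute_ms: float, round_trips: int, rtt_ms: float) -> float:
    """Total time for a chatty piece of work: compute plus network round trips."""
    return compute_ms + round_trips * rtt_ms

# An automation loop that makes 20 data accesses before acting:
remote = completion_time_ms(compute_ms=5, round_trips=20, rtt_ms=190)  # cross-globe
edge = completion_time_ms(compute_ms=5, round_trips=20, rtt_ms=1)      # on-site "edge center"

print(f"remote: {remote:.0f} ms, edge: {edge:.0f} ms")  # remote: 3805 ms, edge: 25 ms
```

Under these assumptions the compute itself is nearly free; where the work runs relative to its data determines almost all of the elapsed time, which is exactly the architectural concern Burris describes.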
The fifth issue, the one that really places an enormous strain on IT, is how we think about backing up
and restoring data. Backup/restore has been an afterthought for most of the history of the computing industry.
As we start to build these more complex applications that have more complex data sources and
more complex services -- and as these applications increasingly are the basis for the business
and the end-value that we’re creating -- we are not thinking about backing up devices or
infrastructure or even subsystems.
We are thinking about what it means to back up -- and, even more importantly, restore --
applications and even entire businesses. How do we restore
applications and businesses across this incredibly complex arrangement of services and data
locations and sources?
I listed five areas that are going to be very important.
We haven’t even talked about the new regime that’s
emerging to support application development and how
that’s going to work, or the role that data scientists and
analytics are going to play in working with application
developers -- again, we could go on and on and on.
There is a wide array of considerations, but I think all
of them are going to come back to the five that I listed.
Gardner: That’s an excellent overview. One of the common themes that I keep hearing from
you, Peter, is that there is a great unknown about the degree of complexity, the degree of risk,
and a lack of maturity. We really are venturing into unknown territory in creating applications that
draw on these resources, assets and data from these different clouds and deployment models.
When you have that degree of unknowns, that lack of maturity, there is a huge opportunity for a
party to come in to bring in new types of management with maturity and with visibility. Who are
some of the players that might fill that role? One that I am familiar with, and that I think I have
seen on theCUBE, is Hewlett Packard Enterprise (HPE) with what they call Project New Hybrid
IT Stack. We still don’t know too much about it. I have also talked about Cloud28+, which is an
ecosystem of global cloud environments that helps mitigate some of the concerns about a single
hyperscaler or a handful of hyperscale providers. What’s the opportunity for a business to come
in to this problem set and start to solve it? What do you think from what you’ve heard so far
about Project New Hybrid IT Stack at HPE?
Key cloud players
Burris: That’s a great question, and I’m going to answer it in three parts. Part number one is, if
we look back historically at the emergence of TCP/IP, TCP/IP killed the mini-computers. A lot of
people like to claim it was microprocessors, and there is an element of truth to that, but many
computer companies had their own proprietary networks. When companies wanted to put those
networks together to build more distributed applications, the mini-computer companies said,
“Yeah, just bridge our network.” That was an unsatisfying answer for the users. So along
came Cisco, TCP/IP, and they flattened out all those mini-computer networks, and in the
process flattened the mini-computer companies.
HPE was one of the few survivors because they embraced TCP/IP much earlier than anybody else.
The second thing is that to build the next
generations of more complex applications -- and
especially applications that involve capabilities like
deep learning or machine learning with increased
automation -- we are going to need the
infrastructure itself to use deep learning, machine
learning, and advanced technology for determining
how the infrastructure is managed, optimized, and
economized. That is an absolute requirement. We
are not going to make progress by adding new
levels of complexity and building increasingly rich
applications if we don’t take full advantage of the technologies that we want to use in the
applications -- inside how we run our infrastructures and run our subsystems, and do all the
things we need to do from a hybrid cloud standpoint.
Ultimately, companies are going to have to step up and start to flatten out some of these cloud
options that are emerging. We will need companies that have significant experience with
infrastructure, that really understand the problem. They need a lot of experience with a lot of
different environments, not just one operating system or one cloud platform. They will need a lot
of experience with these advanced applications, and have both the brainpower and the
inclination to appropriately invest in those capabilities so they can build the type of platforms
that we are talking about. There are not a lot of companies out there that can.
There are few out there, and certainly HPE with its New Stack initiative is one of them, and we
at Wikibon are especially excited about it. It’s new, it’s immature, but HPE has a lot of piece
parts that will be required to make a go of this technology. It’s going to be one of the most
exciting areas of invention over the next few years. We really look forward to working with our
user clients to introduce some of these technologies and innovate with them. It’s crucial to solve
the next generation of problems that the world faces; we can’t move forward without some of
these new classes of hybrid technologies that weave together fabrics that are capable of
running any number of different application forms.
Gardner: I’m afraid we’ll have to leave it there. We have been exploring how mounting
complexity and a lack of multi-cloud services management maturity must be solved so
businesses can grow and thrive as digital enterprises. And we have learned how global
companies must factor in localization, data sovereignty, and issues around data control -- and
even intellectual property control -- if they are going to make the transition to sustainable hybrid IT.
So please join me in thanking our guest, Peter Burris, Head of Research at Wikibon. So good to
have you with us, Peter, I appreciate all your great thoughts.
Burris: You bet. Thanks, Dana.
Gardner: And Peter, if our listeners and readers want to follow you and gain more excellent
insights, where can they find you online?
Burris: I am on Twitter, @plburris. You can always send me an email to
email@example.com. SiliconANGLE is a parent company of Wikibon, and those are the two
best ways to reach me.
Gardner: And I am sure we will be looking forward to seeing you on theCUBE as well.
Also, thank you to our audience for joining us for this BriefingsDirect Voice of the
Analyst discussion on how to best manage the hybrid IT journey to digital business
transformation. I’m Dana Gardner, principal analyst at Interarbor Solutions, your host for this
ongoing series of Hewlett Packard Enterprise-sponsored interviews.
Follow me on Twitter @Dana_Gardner and find more hybrid-focused podcasts at
briefingsdirect.com. Lastly, please pass this valuable content along to your IT community, and
do come back next time.
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript.
Sponsor: Hewlett Packard Enterprise.
Transcript of a discussion on how international cloud models and distributed business
ecosystems factor into hybrid cloud management challenges and solutions. Copyright Interarbor
Solutions, LLC, 2005-2017. All rights reserved.
You may also be interested in:
• How mounting complexity, multi-cloud sprawl, and need for maturity hinder hybrid IT’s ability to
grow and thrive
• Inside story on HPC’s AI role in Bridges 'strategic reasoning' research at CMU
• Philips teams with HPE on ecosystem approach to improve healthcare informatics-driven
• Inside story: How Ormuco abstracts the concepts of private and public cloud across the globe
• How Nokia refactors the video delivery business with new time-managed IT financing models
• IoT capabilities open new doors for Miami telecoms platform provider Identidad IoT
• Inside story on developing the ultimate SDN-enabled hybrid cloud object storage environment
• How IoT and OT collaborate to usher in the data-driven factory of the future
• DreamWorks Animation crafts its next era of dynamic IT infrastructure