1) The document discusses improving decision making support by linking database results to simulations.
2) Currently, decision making involves disjointed tools like spreadsheets and simulations that do not interoperate.
3) The author proposes a new language called SimQL, similar to SQL, that would allow querying and linking simulations in the same way databases are currently used.
4) This would help merge the disciplines of databases and simulations to better support decision making by providing access to simulations and future predictions from within database systems.
Quantifying the future
1. Improving Decision-Making Support
by Linking Database Results to Simulations
Gio Wiederhold
Stanford University
July 2011
Gio Wiederhold SimQL 1
2. Problem: Mismatch
Database Technology should support Decision-Making
• What does database technology do?
o Databases provide information about past events
» Consistent
» Reliable
» Fast
• What does a decision-maker do?
o Guess how decisions will affect the future
» Multiple possibilities
» Uncertainty
» Slow, manual, multiple tools
8/17/2012 Gio: SimQL
3. Information Systems should also
Project into the Future
(timeline: past, now, future)
Support of decision-making requires dealing with the future,
as well as with the past
• Databases deal well with the past
• Sensors can provide current status
• Spreadsheets, simulations deal with the likely futures
Information systems should be able to combine all three
4. Decision-making (DM)
Analyze Alternatives
• Current Capabilities
• Future Expectations
• Planning for them
(timeline: now, future)
Process tasks:
• List resources
• Enumerate alternatives
• Prune alternatives
• Compare alternatives
5. Current Processes
• Data collection
• Data validation
• Data integration
• Information selection
• Data reduction & summarization
• File generation for analysts
• Data conversion to files for spreadsheets
• Model building and testing by analysts
• Planning for likely future scenarios
• Recording expected results
• Comparing many scenarios
• Finding the best plans
• Advising the actual decision maker
6. Progress in Data Integration
Information Integration has progressed in supporting
Decision Making
1. Integrate data from distributed sources
o Issues: inconsistency of scope and timing
2. Capture new relationships
o Often requires expert inter-domain knowledge
3. Include current sensor data
o Select streaming data
4. Include predictions about future courses
******* A new, potentially major topic *******
7. DM support is disjoint and does not interoperate
(diagram: planning and science extensions to move distribution to networked support are also disjoint)
8. Current state of DM Support
past (organized support):
• Data integration
• Databases (distributed, heterogeneous)
future (disjointed support):
• Intuition + spreadsheets
• Resource allocations
• Explicit simulations
• Various point assessments
9. Prediction Requires Tools
(image: book cover; Alfred A. Knopf, 1997)
10. Requirements for DM
• Ubiquitous access to simulations
of a wide variety of types
• Rapid response to parameter changes
o Access to up-to-date facts
o May need High-Performance recomputation
• Model, scenario, and choice retention
o Analysts’ planning to be reused
» But updatable
11. How to merge 2 disciplines
• Databases
o High-level languages
» Data descriptions
o Drive detailed processes
o Intentional
• Simulations & spreadsheets
o High-level languages
» Model descriptions
o Parameter driven
o Extensional
12. Integration concept
• Enable intentional simulation access
o Follow database model
» Similar to data description
o Provide interfaces
»To support needed processes
Create SimQL, similar to SQL:
a schema & links to access procedures
13. Transform Data to Information
(diagram: database design yields a schema; data collection feeds the database; SQL delivers reports to middle management and users. Model design and modeling yield plans; value-added services serve data-driven decision-makers.)
14. Language implementation
Stanford Experiment uses an existing SQL parser:
1. Replace the SELECT verb with ESTIMATE;
2. Remove the UPDATE statement. Nothing persists
3. Replace CREATE DATABASE with CREATE MODEL;
4. Add to the CREATE attributes IN, OUT, and INOUT;
5. Add a REGISTER statement to identify resources;
6. Replace SQL’s code generators that access stored data
with functions that:
a. Deliver the query IN parameters to the simulations;
b. Collect the data specified as OUT parameters;
c. Return the result.
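The parse-and-dispatch steps above can be sketched in Python. Everything here is illustrative, not the Stanford implementation: the REGISTRY dict stands in for SimQL's REGISTER statement, and the weather_sim stub with its output values is a hypothetical simulation wrapper.

```python
import re

# Registry of simulation wrappers, standing in for SimQL's REGISTER statement.
# Each wrapper takes the WHERE-clause bindings (IN parameters) and returns a
# dict of OUT attributes. All names and values here are illustrative.
REGISTRY = {}

def register(model_name):
    def wrap(fn):
        REGISTRY[model_name] = fn
        return fn
    return wrap

@register("WeatherSimulation")
def weather_sim(params):
    # Stand-in for a real forecasting model; returns fixed sample values.
    return {"Temperature": 21.5, "Cloudcover": 0.4,
            "Windspeed": 12.0, "Winddirection": "NW"}

def estimate(query):
    """Parse a minimal 'ESTIMATE attrs FROM model WHERE k = v AND ...' query,
    route the IN parameters to the registered simulation, and project the
    requested OUT attributes from its result."""
    m = re.match(
        r"ESTIMATE\s+(?P<attrs>.+?)\s+FROM\s+(?P<model>\w+)"
        r"(?:\s+WHERE\s+(?P<where>.+))?$",
        query.strip(), re.IGNORECASE | re.DOTALL)
    if not m:
        raise ValueError("not a SimQL ESTIMATE statement")
    attrs = [a.strip() for a in m.group("attrs").split(",")]
    params = {}
    if m.group("where"):
        for clause in re.split(r"\s+AND\s+", m.group("where"), flags=re.IGNORECASE):
            k, v = (s.strip().strip("'") for s in clause.split("="))
            params[k] = v
    out = REGISTRY[m.group("model")](params)   # IN parameters to the simulation
    return {a: out[a] for a in attrs}          # OUT parameters back to the caller

result = estimate("ESTIMATE Temperature, Windspeed FROM WeatherSimulation "
                  "WHERE Date = 'tomorrow' AND Location = 'ORD'")
```

Nothing persists between calls, matching the removal of UPDATE in step 2.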
15. Examples
SQL:
SELECT Temperature, Cloudcover, Windspeed, Winddirection
FROM WeatherDB
WHERE Date = 'yesterday' AND Location = 'ORD'
SimQL:
ESTIMATE Temperature, Cloudcover, Windspeed, Winddirection
FROM WeatherSimulation
WHERE Date = 'tomorrow' AND Location = 'ORD'
16. Available Functions
1. Continuously executing: weather prediction
o SimQL result reports best-match samples
2. Execution specific to the query: spreadsheet what-if assessment
o may require HPC power for adequate response
3. Past simulations collect results in a base: materials
o performs interpolations or extrapolations to match query parameters
4. Combinations, i.e., 2 + 3: top-layer simulation using stored
partial lower-level results: weapon performance in a new setting
5. Human-in-the-loop: wrapper for Amazon’s Mechanical Turk
Note:
• A simulation service program can be written in any language
• A simulation service must comply with the interface specification
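Function type 3 above, a base of past simulation results answered by interpolation or extrapolation, can be sketched as follows. The stored_runs samples and the single numeric parameter are hypothetical, standing in for results of earlier runs of a materials model.

```python
import bisect

# Hypothetical base of past simulation runs: (parameter, result) pairs,
# e.g. temperature in K mapped to some material property.
stored_runs = sorted([(300.0, 1.10), (400.0, 1.45), (500.0, 2.00)])

def estimate_from_base(param):
    """Answer a query by linearly interpolating between the two stored runs
    that bracket `param`, or extrapolating from the nearest pair when the
    query falls outside the sampled range."""
    xs = [x for x, _ in stored_runs]
    ys = [y for _, y in stored_runs]
    i = bisect.bisect_left(xs, param)
    if i == 0:            # below the sampled range: extrapolate from first pair
        i = 1
    elif i == len(xs):    # above the range: extrapolate from last pair
        i = len(xs) - 1
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (param - x0) / (x1 - x0)
```

A query for an unsampled parameter, say `estimate_from_base(350.0)`, is then served without re-running the simulation.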
18. Interfaces enable integration:
SimQL to access Simulations
(timeline: past, now, future)
• Databases: accessed via SQL or XML, through CORBA-compliant wrappers
• Message systems and sensors: the present
• Simulations: accessed via SimQL and compliant wrappers
19. Current State of SimQL research
(diagram: a GUI collects language requirements; a test application drives wrappers for spreadsheets, a weather simulation, and an engineering simulation)
21. More to be done
• The Stanford experiment only produced point results.
• A decision maker would estimate multiple scenarios:
1. Collect results identified with their parameters
2. Provide search functions to compare results
o Consider time lines for result synchronization
3. Support pruning of low-value results
4. Deliver only high-value results to the decision maker
22. Use of Simulation Results
(figure: a tree of alternatives over time, each branch labeled with its probability)
Simulation results can be composed for
alternative Courses-of-actions
Composition should include computation
and recomputation of likelihoods
Likelihoods change as now moves forwards
and eliminates earlier alternatives.
23. Estimates have probabilities
• p=30% chance of rain
• Flight p=91% likely to arrive within 15 min of ETA
• Interest rate p=50% same, p=25% 1% higher, … .
• Employee p=50% returns to work in a week, … .
• Project p=10% completed in time, …
• Spreadsheets can compute alternative values
with such data provided by the model builder,
not the SimQL user.
24. The branches can be labeled with probabilities,
then assessed using the outcome values
(figure: a decision tree over past, now, and future; branches carry probabilities and outcome values, and expected values are computed for next-period and subsequent-period alternatives)
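The assessment this slide describes, weighting outcome values by branch probabilities period by period, reduces to an expected-value fold over the tree. A minimal sketch, with made-up probabilities and values rather than those in the figure:

```python
def expected_value(node):
    """Fold a scenario tree into an expectation: a node is either a terminal
    numeric value or a list of (probability, subtree) branches."""
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_value(child) for p, child in node)

# Two periods of alternatives; all numbers are illustrative.
tree = [
    (0.3, [(0.5, 1000), (0.5, 2000)]),   # favourable course of action
    (0.5, [(0.6, 200), (0.4, -400)]),    # middle course
    (0.2, -3000),                        # unfavourable course, terminal
]
ev = expected_value(tree)   # 0.3*1500 + 0.5*(-40) + 0.2*(-3000)
```

Recomputation as likelihoods change amounts to editing the probabilities and folding again.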
25. Integrating data & planning support will make
our data reusable and much more valuable
(figure: "A Pruned Bush": the decision tree is re-assessed as time marches forward, with eliminated branches pruned; inputs come from databases, spreadsheets, other simulations, message systems, and sensors)
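Pruning and re-assessment as now moves forward can be sketched as dropping low-probability branches and renormalizing the survivors so the remaining scenarios again sum to probability 1. The scenario labels and numbers below are illustrative.

```python
# Scenario branches as (probability, label) pairs; illustrative numbers.
scenarios = [(0.05, "crash"), (0.55, "steady"), (0.30, "growth"), (0.10, "boom")]

def prune(branches, threshold=0.1):
    """Discard scenarios whose probability has fallen below `threshold`
    (e.g. alternatives that time has nearly eliminated), then renormalize
    the survivors before comparing the remaining plans."""
    kept = [(p, s) for p, s in branches if p >= threshold]
    total = sum(p for p, _ in kept)
    return [(p / total, s) for p, s in kept]

pruned = prune(scenarios)   # "crash" is dropped; the rest are rescaled
```

Re-running prune each time probabilities are updated gives the rolling re-assessment the slide calls for.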
26. Even the present needs SimQL
Last recorded observations, plus simple simulations to extrapolate data, give a point-in-time situational assessment.
(timeline: past, now, future)
Not all data are current:
• Is the delivery truck in X?
• Is the right stuff on the truck?
• Will the crew be at X?
• Will the forces be ready to accept delivery?
27. Use of Simulation Results
Simulation results can be composed for
Alternative Courses-of-actions
Composition should be seamless, elegant,
with computation and recomputation of
likelihoods
Results change as now moves forwards and
eliminates earlier alternatives.
28. Summary
Databases:
• serve clients via SQL by sharing a model (the schema)
• a query language over the model
• the SQL interface enables independence of application development, independence of DBMS technology development, and reuse of infrastructure
• Today: most new systems use a DBMS for data storage, even with less performance and inability to handle all problems, but enough of them well enough.
Simulations should:
• serve clients via SimQL by sharing a model (a research question)
• a query language over the model
• a SimQL interface enables independence of application development, independence of simulation technology development, and reuse of infrastructure
• Objective: build information systems combining DBMS and simulations, even with less performance and inability to handle all problems, but enough of them . . .
29. Further research questions
• How to move seamlessly from the past to the future?
• How can multiple futures be managed (indexed)?
• How can multiple futures be compared, selected?
• How should joint uncertainty be computed?
• How can the NOW point be moved automatically?
30. Future information systems
Combine data from the past, with current data,
knowledge, and predictions into the future
Assessment of the values of alternative possible outcomes.
31. SimQL research questions
• How little of the model needs to be exposed?
• How can defaults be set rationally?
• How should expected execution cost be reported?
• How should uncertainty be reported?
• Are there differences among application areas that
require different language structures?
• Are there differences among application areas that
require different language features?
• How will the language interface support effective
partitioning and distribution?
32. Moving to a Service Paradigm
• Server is an independent contractor, defines service
• Client selects service, and specifies parameters
• Server’s success depends on value provided
• Some form of payment received for services
Databases are a current example.
Simulations have the same potential.
33. Summary of SimQL
A new service for Decision Making:
• follows database paradigm
– ( by about 25 years )
• coherence in prediction
– displacement of ad-hoc practices
• seamless information integration
– single paradigm for decision makers
• simulation industry infrastructure
– investment has a potential market
– should follow the database industry model:
Interfaces promote new industries
34. Publications
Gio Wiederhold: "Information Systems that Really Support Decision-Making"; 11th International Symposium on Methodologies for Intelligent Systems (ISMIS), Warsaw, Poland, June 1999; in Ras & Skowron (eds.): Foundations for Intelligent Systems, Springer LNAI 1609, pages 56-66.
Gio Wiederhold and Rushan Jiang: "Augmenting Information Systems with Access to Predictive Tools"; http://infolab.stanford.edu/pub/gio/2000/VLDB2000-1.htm
The specifics of the language as implemented are at http://www-db.stanford.edu/LIC/SimQL.html