4 - SEASONS is the application system designed specifically for Fashion companies, enabling targeted, complete and efficient management of all business processes of companies operating in this sector. The new SYS-DAT solution was developed as an integrated extension of SAP Business One; it enables rapid interaction among the different organizational structures and, above all, is easy for users to work with. Designed to manage one or more production lines for different targets, it is multilingual and multi-company, allowing full management without communication problems between headquarters and branches.
SYS-DAT for SAP Business One: technological innovation for business competitiveness within a Business Platform that goes beyond the paradigm of traditional Business Solutions. Four areas of development and integration: the Core, with complete functional coverage and intercompany integration; the Cloud, to reduce management costs; In-Memory Computing, for top performance; and the Mobile App, for native use on smartphones and tablets.
Proficy Workflow customer presentation (Italian) - Enzo M. Tieghi
Italian-language presentation of Proficy Workflow, a software product by GE Intelligent Platforms, distributed by Servitecno, for operator guidance/instruction, alarm response management, etc.
Assessment of the repositioning of Danone's Greek-style yogurt. Marketing research carried out for the course "Strumenti e indagini per le organizzazioni e i mercati", profs. M. Ivaldi and M. Miglioretti.
Mobile, BPM and Cloud via MDD: a technological lever for business [ITA] - Marco Brambilla
This workshop describes a model-driven software design approach for developing enterprise Web and mobile applications, also supporting their integration within company business processes.
The workshop shows how agile techniques can address the definition and restructuring of business processes, quickly delivering working prototypes and final application deployments, even in the face of flexibility needs and continuously evolving requirements.
For this purpose, cloud deployment proves a winning choice, guaranteeing maximum flexibility in design and installation. The workshop shows how model-driven techniques ease access to the cloud.
To demonstrate these concepts, concrete case studies are presented along with the use of WebRatio, an innovative tool for agile, model-based design of business processes, front-ends, and features covering integration with SOA, BPM, Mobile, and Cloud platforms.
From Conceptual to Executable BPMN Process Models: A Step-by-Step Method - Marlon Dumas
Step-by-step tutorial showing how to turn BPMN process models designed by business analysts into executable processes deployable in a Business Process Management System. This tutorial was first given at the 11th International Conference on Business Process Management in Beijing, China on 29 August 2013. The tutorial is part of a series of lectures available at http://fundamentals-of-bpm.org
TRS, one of the testimonials at the Emerasoft Day - 23 May 2012.
Polarion has been chosen as the ALM tool at TRS (www.trs.it), with different integrated modules:
SCRUM, waterfall lifecycle, earned value analysis, integrated planning
Talk title: Extensive use of Polarion
From CMMI ML3 to Business Process Management
The success story presented by TRS at the Polarion User Conference 2010 in Arezzo.
An exclusive excerpt from the training material of the basic project management course.
For more information or to download the PDF file, visit: www.frprojects.com
Find out how to make your business processes efficient and agile.
The webinar introduces the basic concepts of Business Process Management and presents one of the tools able to model and execute business processes:
Bonita Open Solution.
Agenda:
* BPM and BPMS: definitions and differences
* Overview of BonitaSoft
* The "Bonita Open Solution" tool:
* Bonita Studio
* Bonita Engine
* Bonita User Experience (XP)
* Features and key capabilities of BOS
* Demo of Bonita Open Solution
(creation and execution of a process)
For more information: contact@profesia.it
Biznology is an IT engineering company offering consulting and support services for the design and development of applications for information systems of varying size and complexity.
A rigorous methodological approach and deep knowledge of IT architectures characterize all consulting activities of the professionals working with Biznology.
The name Biznology comes from merging the words business and technology.
We believe that truly valuable results in ICT can only be achieved with a complete overall vision that combines the business perspective with technologies and the governance of IT infrastructures.
We propose solutions and interventions whose success rests fundamentally on the ability to identify real business needs and to create value through the right technologies and the improvement of information systems.
Biznology, formerly Master Reseller for Italy of the ASF solution, is now positioned as a division focused on application integration within the activities of Talend Italia, complementing the distribution of these products with the aforementioned consulting services in Project/Program Management, IT Governance, Enterprise Architecture, and support for the design and development of enterprise information systems.
Process and Service Modeling Analysis - Presentation (ITA) - Matteo Stabile
Project dealing with the modeling and implementation of a process for scientific missions at the Dipartimento di Ingegneria Informatica of the University "La Sapienza" of Rome. Technologies and languages used: Bizagi, Java, LaTeX, PowerPoint, NetBeans, BPMN, UML.
Microsoft SharePoint: the enabling platform - DOCFLOW
Talk by Silvio Filippi, SharePoint Product Manager, at a conference organized by DocFlow and Microsoft titled "Microsoft SharePoint, SAP e DocFlow: efficienza in tutte le aree aziendali".
Hierarchical Transformers for User Semantic Similarity - ICWE 2023 - Marco Brambilla
We discuss the use of hierarchical transformers for user semantic similarity in the context of analyzing users' behavior and profiling social media users. The objectives of the research include finding the best model for computing semantic user similarity, exploring the use of transformer-based models, and evaluating whether the embeddings reflect the desired similarity concept and can be used for other tasks.
We use a large dataset of Twitter users and apply an automatic labeling approach. The dataset consists of English tweets posted in November and December 2020, totaling about 27GB of compressed data. Preprocessing steps include filtering out short texts, cleaning user connections, and selecting a benchmark set of users for evaluation.
Since Transformer architectures are known to work well on short text, we cannot use them on extensive collections of tweets describing the activity of a user. Therefore, we propose a hierarchical structure of transformer models to be used first on tweets and then on their aggregations.
The models used in the study include hierarchical transformers, and the tweet embeddings are obtained using four Transformer-based models: RoBERTa, BERTweet, Sentence-BERT, and Twitter4SSE. We test different techniques for processing tweet embeddings to generate accurate user embeddings, including mean pooling, recurrence over BERT (RoBERT), and transformer over BERT (ToBERT).
The evaluation of the models is done on a set of 5,000 users, comparing user similarities with 30 other candidate users, 5 of which are considered similar and 25 considered dissimilar. The evaluation metrics used include mean average precision (MAP), mean reciprocal rank (MRR) at 10, and normalized discounted cumulative gain (nDCG).
The optimization process involves selecting a loss function and using the AdamW optimizer with specific hyperparameters. The results show that the hierarchical approach with a Stage-1 Twitter4SSE model and a Stage-2 Transformer model performs the best among the alternatives.
In conclusion, the research provides a large unbiased dataset for user similarity analysis, presents a hierarchical language model optimized for accurate user similarity computation, and validates the models' performance on similarity tasks, with potential applications to related problems.
The future work includes investigating the impact of time and topic drift on the models' performance.
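The first-stage aggregation step (mean pooling of tweet embeddings into a single user vector) can be sketched as follows. This is a minimal sketch: dimensions are illustrative and the random vectors stand in for Stage-1 Transformer outputs, which a real pipeline would compute with one of the models above.

```python
import numpy as np

def user_embedding_mean_pool(tweet_embeddings: np.ndarray) -> np.ndarray:
    """Aggregate per-tweet vectors (n_tweets x dim) into one user vector."""
    return tweet_embeddings.mean(axis=0)

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Illustrative stand-in for Stage-1 output: 40 tweets embedded in 8 dims.
rng = np.random.default_rng(0)
user_a = user_embedding_mean_pool(rng.normal(size=(40, 8)))
user_b = user_embedding_mean_pool(rng.normal(size=(40, 8)))
sim = cosine_similarity(user_a, user_b)
assert -1.0 <= sim <= 1.0  # cosine similarity is always bounded
```

The RoBERT and ToBERT variants mentioned above replace the mean with a learned recurrent or attention-based aggregator over the same per-tweet vectors.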
Exploring the Bi-verse. A trip across the digital and physical ecospheres - Marco Brambilla
The Web and social media are the environments where people post their content, opinions, activities, and resources. Therefore, a considerable amount of user-generated content is produced every day for a wide variety of purposes. On the other hand, people live their everyday life immersed in the physical world, where society, economy, politics and personal relations continuously evolve. These two opposite and complementary environments are today fully integrated: they reflect each other and they interact with each other ever more strongly.
Exploring and studying content and data coming from both environments offers a great opportunity to understand the ever evolving modern society, in terms of topics of interest, events, relations, and behaviour.
In this speech I will discuss, through business cases and socio-political scenarios, how we can extract insights and understand reality by combining and analyzing data from the digital and physical world, so as to reach a better overall picture of reality itself. Along this path, we need to take into account that reality is complex and varies in time, space and along many other dimensions, including societal and economic variables. The speech highlights the main challenges that need to be addressed and outlines some data science strategies that can be applied to tackle them.
This slide deck has been presented as a keynote speech at WISE 2022 in Biarritz, France.
In online social media platforms, users can express their ideas by posting original content or by adding comments and responses to existing posts, thus generating virtual discussions and conversations. Studying these conversations is essential for understanding the online communication behavior of users. This study proposes a novel approach to retrieve popular patterns in online conversations using network-based analysis. The analysis consists of two main stages: intent analysis and network generation. Users’ intention is detected using keyword-based categorization of posts and comments, integrated with classification through Naïve Bayes and Support Vector Machine algorithms for uncategorized comments. A continuous human-in-the-loop approach further improves the keyword-based classification. To build and understand communication patterns among the users, we build conversation graphs starting from the hierarchical structure of posts and comments, using a directed multigraph network. The experiments categorize 90% of comments with 98% accuracy on a real social media dataset. The model then identifies relevant patterns in terms of shape and content, and finally determines the relevance and frequency of the patterns. Results show that the most popular online discussion patterns obtained from conversation graphs resemble real-life interactions and communication.
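The conversation graphs described above, built from the hierarchical parent links of posts and comments, can be sketched with a minimal directed multigraph using only the standard library. All identifiers and the thread data are illustrative, not taken from the study's dataset.

```python
from collections import defaultdict

def build_conversation_graph(items):
    """items: (item_id, author, parent_id or None). Adds one directed edge
    author -> parent's author per reply, keeping parallel edges (multigraph)."""
    author_of = {item_id: author for item_id, author, _ in items}
    edges = defaultdict(list)  # (src, dst) -> list of reply item_ids
    for item_id, author, parent in items:
        if parent is not None:
            edges[(author, author_of[parent])].append(item_id)
    return edges

thread = [
    ("p1", "alice", None),      # original post
    ("c1", "bob", "p1"),        # bob replies to alice
    ("c2", "carol", "c1"),      # carol replies to bob
    ("c3", "bob", "p1"),        # bob replies to alice again (parallel edge)
]
g = build_conversation_graph(thread)
assert g[("bob", "alice")] == ["c1", "c3"]   # two parallel bob->alice edges
assert g[("carol", "bob")] == ["c2"]
```

Pattern mining as in the study would then look for recurring shapes (chains, stars, back-and-forth exchanges) in such graphs.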
Trigger.eu: Cocteau game for policy making - introduction and demo - Marco Brambilla
COCTEAU stands for "Co-Creating the European Union".
It is a project supported by the European Union whose objective is to involve citizens in cooperating alongside policy makers, contributing to building a better future.
Generation of Realistic Navigation Paths for Web Site Testing using RNNs and ... - Marco Brambilla
A large audience of users and typically a long time frame are needed to produce sensible and useful log data, making it an expensive task.
To address this limit, we propose a method that focuses on the generation of REALISTIC NAVIGATIONAL PATHS, i.e., web logs.
Our approach is extremely relevant because it can at the same time tackle the problem of lack of publicly available data about web navigation logs, and also be adopted in industry for AUTOMATIC GENERATION OF REALISTIC TEST SETTINGS of Web sites yet to be deployed.
The generation has been implemented using deep learning methods for producing more realistic navigation activities, namely:
- Recurrent Neural Networks, which are very well suited to temporally evolving data;
- Generative Adversarial Networks: neural networks aimed at generating new data, such as images or text, very similar to the original ones and sometimes indistinguishable from them, which have become increasingly popular in recent years.
We run experiments using open data sets of weblogs for training, and we run tests to assess the performance of the methods. Results in generating new weblog data are quite good with respect to the two evaluation metrics adopted (BLEU and human evaluation).
Our study is described in detail in the paper published at ICWE 2020 – International Conference on Web Engineering with DOI: 10.1007/978-3-030-50578-3. It’s available online on the Springer Web site.
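To make the generation task concrete, here is a deliberately simple first-order Markov baseline trained on toy example logs. This is NOT the RNN/GAN approach from the paper (those capture much longer-range navigation structure); it is only a stand-in to illustrate what "generating navigational paths from observed sessions" means, and all page names are made up.

```python
import random
from collections import defaultdict

def train_transitions(sessions):
    """Estimate next-page counts from navigation sessions (lists of page ids)."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sessions:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return counts

def generate_path(counts, start, max_len=5, seed=42):
    """Sample a synthetic session by walking the transition counts."""
    rng = random.Random(seed)
    path, page = [start], start
    while len(path) < max_len and counts[page]:
        nxt = rng.choices(list(counts[page]),
                          weights=list(counts[page].values()))[0]
        path.append(nxt)
        page = nxt
    return path

logs = [["/home", "/search", "/item", "/cart"],
        ["/home", "/search", "/item"],
        ["/home", "/item", "/cart"]]
t = train_transitions(logs)
path = generate_path(t, "/home")
assert path[0] == "/home" and len(path) <= 5
```

An RNN generator replaces the one-step transition table with a learned sequence model, which is what allows the generated logs to look realistic beyond pairwise page transitions.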
Analyzing rich club behavior in open source projects - Marco Brambilla
The network of collaborations in an open source project can reveal relevant emergent properties that influence its prospects of success.
In this work, we analyze open source projects to determine whether they exhibit a rich-club behavior, i.e., a phenomenon where contributors with a high number of collaborations (i.e., strongly connected within the collaboration network) are likely to cooperate with other well-connected individuals. The presence or absence of a rich-club has an impact on the sustainability and robustness of the project.
For this analysis, we build and study a dataset with the 100 most popular projects in GitHub, exploiting connectivity patterns in the graph structure of collaborations that arise from commits, issues and pull requests. Results show that rich-club behavior is present in all the projects, but only few of them have an evident club structure. We compute coefficients both for single source graphs and the overall interaction graph, showing that rich-club behavior varies across different layers of software development. We provide possible explanations of our results, as well as implications for further analysis.
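The coefficient behind this kind of analysis is simple to compute: for each degree threshold k, take the subgraph of nodes with degree greater than k and measure how close it is to a clique. A minimal sketch on a toy undirected graph follows; note that rigorous analyses (including normalization against randomized graphs, standard practice for rich-club studies) go beyond this raw coefficient.

```python
def degrees(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def rich_club_coefficient(edges, k):
    """phi(k) = 2*E_k / (N_k*(N_k-1)), over nodes with degree > k."""
    deg = degrees(edges)
    rich = {n for n, d in deg.items() if d > k}
    if len(rich) < 2:
        return 0.0
    e_k = sum(1 for u, v in edges if u in rich and v in rich)
    return 2 * e_k / (len(rich) * (len(rich) - 1))

# Toy collaboration graph: a triangle of hubs, each with one extra leaf.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("a", "x"), ("b", "y"), ("c", "z")]
assert rich_club_coefficient(edges, 1) == 1.0  # hubs a, b, c form a clique
assert rich_club_coefficient(edges, 3) == 0.0  # no node has degree > 3
```

In the paper's setting, the edges come from co-occurrence of contributors on commits, issues, and pull requests, computed per layer and on the overall interaction graph.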
Analysis of On-line Debate on Long-Running Political Phenomena. The Brexit C... - Marco Brambilla
In this study, we demonstrate that computational social science is important for understanding people's behavior in political phenomena. Based on the analysis of the long-running Brexit debate on Twitter, we predict public stance and discussion topics, and we measure the involvement of automated accounts and of politicians' social media accounts.
Community analysis using graph representation learning on social networks - Marco Brambilla
In a world more and more connected, new and complex interaction patterns can be extracted from the communication between people. This is extremely valuable for brands, which can better understand the interests of users and the trends on social media to better target their products. In this paper, we aim to analyze the communities that arise around commercial brands on social networks to understand the meaning of similarity, collaboration, and interaction among users. We exploit the network that builds around the brands by encoding it into a graph model. We build a social network graph, considering user nodes and friendship relations; then we compare it with a heterogeneous graph model, where posts and hashtags are also considered as nodes and connected to the different node types; finally, we build a reduced network, generated by inducing direct user-to-user connections through the intermediate nodes (posts and hashtags). These different variants are encoded using graph representation learning, which generates a numerical vector for each node. Machine learning techniques are applied to these vectors to extract valuable insights for each user and for the communities they belong to. In the paper, we report on our experiments performed on an emerging fashion brand on Instagram, and we show that our approach is able to discriminate potential customers for the brand and to highlight meaningful sub-communities composed of users that share the same kind of content on social networks.
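The "reduced network" construction described above (inducing direct user-to-user edges through intermediate hashtag or post nodes) is essentially a bipartite projection, which can be sketched as follows. The data is illustrative and a real pipeline would build the bipartite graph from crawled posts.

```python
from itertools import combinations
from collections import defaultdict

def project_users(user_to_tags):
    """Connect two users with weight = number of shared hashtags."""
    tag_to_users = defaultdict(set)
    for user, tags in user_to_tags.items():
        for tag in tags:
            tag_to_users[tag].add(user)
    weights = defaultdict(int)
    for users in tag_to_users.values():
        for u, v in combinations(sorted(users), 2):
            weights[(u, v)] += 1  # one unit per shared intermediate node
    return dict(weights)

posts = {"u1": {"#fashion", "#style"},
         "u2": {"#fashion", "#travel"},
         "u3": {"#style", "#fashion"}}
w = project_users(posts)
assert w[("u1", "u3")] == 2   # share #fashion and #style
assert w[("u1", "u2")] == 1   # share only #fashion
```

The resulting weighted user-to-user graph is then what gets fed to graph representation learning to obtain per-node vectors.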
Data Cleaning for social media knowledge extraction - Marco Brambilla
Social media platforms let users share their opinions through textual or multimedia content. In many settings, this becomes a valuable source of knowledge that can be exploited for specific business objectives. Brands and companies often ask for social media monitoring to understand the stance, opinion, and sentiment of their customers, audience and potential audience. This is crucial for them because it lets them understand trends and future commercial and marketing opportunities.
However, all this relies on a solid and reliable data collection phase, which guarantees that all the analyses, extractions and predictions are applied to clean, solid and focused data. Indeed, the typical topic-based collection of social media content performed through keyword-based search entails very noisy results.
We recently implemented a simple study aiming at cleaning the data collected from social content, within specific domains or related to given topics of interest. We propose a basic method for data cleaning and removal of off-topic content based on supervised machine learning techniques, i.e. classification, over data collected from social media platforms based on keywords regarding a specific topic. We define a general method for this and then we validate it through an experiment of data extraction from Twitter, with respect to a set of famous cultural institutions in Italy, including theaters, museums, and other venues.
For this case, we collaborated with domain experts to label the dataset, and then we evaluated and compared the performance of classifiers that are trained with different feature extraction strategies.
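A minimal supervised classifier for this kind of on-topic vs off-topic filtering can be sketched as a toy multinomial Naive Bayes over word counts. This is an illustration of the classification step under made-up training data, not the study's actual pipeline, which uses expert-labeled data and richer feature extraction strategies.

```python
import math
from collections import Counter, defaultdict

class TinyNB:
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for text, y in zip(texts, labels):
            self.word_counts[y].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        n = sum(self.class_counts.values())
        for y, cnt in self.class_counts.items():
            lp = math.log(cnt / n)  # class prior
            total = sum(self.word_counts[y].values())
            for w in text.lower().split():
                # Laplace smoothing over the shared vocabulary
                lp += math.log((self.word_counts[y][w] + 1) /
                               (total + len(self.vocab)))
            best, best_lp = (y, lp) if lp > best_lp else (best, best_lp)
        return best

clf = TinyNB().fit(
    ["museum exhibit opening tonight", "new exhibit at the theater",
     "buy cheap followers now", "win a free phone now"],
    ["on-topic", "on-topic", "off-topic", "off-topic"])
assert clf.predict("exhibit opening at the museum") == "on-topic"
assert clf.predict("free followers now") == "off-topic"
```

In practice one would use a library implementation and evaluate, as in the study, against labels produced by domain experts.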
Iterative knowledge extraction from social networks. The Web Conference 2018 - Marco Brambilla
Knowledge in the world continuously evolves, and ontologies are largely incomplete, especially regarding data belonging to the so-called long tail. We propose a method for discovering emerging knowledge by extracting it from social content. Once initialized by domain experts, the method is capable of finding relevant entities by means of a mixed syntactic-semantic method. The method uses seeds, i.e. prototypes of emerging entities provided by experts, for generating candidates; then, it associates candidates to feature vectors built by using terms occurring in their social content and ranks the candidates by using their distance from the centroid of seeds, returning the top candidates. Our method can run iteratively, using the results as new seeds.
In this paper we address the following research questions: (1) How does the reconstructed domain knowledge evolve if the candidates of one extraction are recursively used as seeds? (2) How does the reconstructed domain knowledge spread geographically? (3) Can the method be used to inspect the past, present, and future of knowledge? (4) Can the method be used to find emerging knowledge?
This work was presented at The Web Conference 2018, MSM workshop.
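The centroid-ranking step described above (score each candidate by its distance from the centroid of the seed feature vectors, return the top candidates) can be sketched as follows, with illustrative 2-D vectors standing in for the term-based feature vectors of the method.

```python
import numpy as np

def rank_candidates(seed_vecs, cand_vecs, cand_names, top_k=2):
    """Return candidate names sorted by Euclidean distance from seed centroid."""
    centroid = np.asarray(seed_vecs).mean(axis=0)
    dists = np.linalg.norm(np.asarray(cand_vecs) - centroid, axis=1)
    order = np.argsort(dists)
    return [cand_names[i] for i in order[:top_k]]

seeds = [[1.0, 0.0], [0.8, 0.2]]                 # expert-provided prototypes
cands = [[0.9, 0.1], [0.0, 1.0], [0.7, 0.2]]     # illustrative feature vectors
top = rank_candidates(seeds, cands, ["alpha", "beta", "gamma"])
# centroid = [0.9, 0.1]; alpha is closest, gamma next, beta farthest
assert top == ["alpha", "gamma"]
```

Running the method iteratively then means feeding `top` back in as the next round's seeds, which is exactly what research question (1) investigates.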
Driving Style and Behavior Analysis based on Trip Segmentation over GPS Info... - Marco Brambilla
Over one billion cars interact with each other on the road every day. Each driver has his own driving style, which can impact safety, fuel economy and road congestion. Knowledge about the driver's style could be used to encourage "better" driving behaviour through immediate feedback while driving, or by scaling auto insurance rates based on the aggressiveness of the driving style.
In this work we report on our study of driving behaviour profiling based on unsupervised data mining methods. The main goal is to detect the different driving behaviours, and thus to cluster drivers with similar behaviour.
This paves the way to new business models related to the driving sector, such as Pay-How-You-Drive insurance policies and car rentals.
Driver behavioral characteristics are studied by collecting information from GPS sensors on the cars and by applying three different analysis approaches (DP-means, Hidden Markov Models, and Behavioural Topic Extraction) to the contextual scene detection problem on car trips, in order to detect different behaviours along each trip. Subsequently, drivers are clustered into similar profiles based on these behaviours, and the results are compared with a human-defined ground truth on driver classification. The proposed framework is tested on a real dataset containing sampled car signals. While the different approaches show relevant differences in trip segment classification, the coherence of the final driver clustering results is surprisingly high.
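Of the three approaches mentioned, DP-means is the simplest to sketch: it behaves like k-means, except that a new cluster is opened whenever a point lies farther than a penalty lambda from every existing centroid, so the number of clusters is not fixed in advance. The toy 2-D points below stand in for trip-segment feature vectors; empty-cluster handling is omitted for brevity.

```python
import numpy as np

def dp_means(points, lam, n_iter=10):
    """DP-means: the number of clusters grows as needed."""
    centroids = [points[0]]
    for _ in range(n_iter):
        assign = []
        for p in points:
            d = [np.linalg.norm(p - c) for c in centroids]
            if min(d) > lam:                  # too far from all clusters
                centroids.append(p.copy())    # open a new one
                assign.append(len(centroids) - 1)
            else:
                assign.append(int(np.argmin(d)))
        centroids = [points[np.array(assign) == k].mean(axis=0)
                     for k in range(len(centroids))]
    return centroids, assign

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centroids, assign = dp_means(pts, lam=1.0)
assert len(centroids) == 2                # two well-separated behaviors found
assert assign[0] == assign[1] and assign[2] == assign[3]
```

The appeal for behavior profiling is that lambda encodes "how different two segments must be to count as distinct behaviors", instead of guessing the number of behaviors up front.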
Myths and challenges in knowledge extraction and analysis from human-generate... - Marco Brambilla
For centuries, science (in German "Wissenschaft") has aimed to create ("schaften") new knowledge ("Wissen") from the observation of physical phenomena, their modelling, and empirical validation. Recently, a new source of knowledge has emerged: not (only) the physical world any more, but the virtual world, namely the Web with its ever-growing stream of data materialized in the form of social network chattering, content produced on demand by crowds of people, messages exchanged among interlinked devices in the Internet of Things. The knowledge we may find there can be dispersed, informal, contradicting, unsubstantiated and ephemeral today, while already tomorrow it may be commonly accepted. The challenge is once again to capture and create knowledge that is new, has not been formalized yet in existing knowledge bases, and is buried inside a big, moving target (the live stream of online data). The myth is that existing tools (spanning fields like semantic web, machine learning, statistics, NLP, and so on) suffice to the objective. While this may still be far from true, some existing approaches are actually addressing the problem and provide preliminary insights into the possibilities that successful attempts may lead to.
The talk explores the mixed realistic-utopian domain of knowledge extraction and reports on some tools and cases where the digital and physical worlds have been brought together for a better understanding of our society.
Harvesting Knowledge from Social Networks: Extracting Typed Relationships amo... - Marco Brambilla
Knowledge bases like DBpedia, Yago or Google's Knowledge Graph contain huge amounts of ontological knowledge harvested from (semi-)structured, curated data sources, such as relational databases or XML and HTML documents. Yet, the Web is full of knowledge that is not curated and/or structured and, hence, not easily indexed, for example social data. Most work so far in this context has been dedicated to the extraction of entities, i.e., people, things or concepts. This poster describes our work toward the extraction of relationships among entities. The objective is reconstructing a typed graph of entities and relationships to represent the knowledge contained in social data, without the need for a-priori domain knowledge. The experiments with real datasets show promising performance across a variety of domains.

The key distinguishing feature of the work is its focus on highly unstructured social data (tweets and Facebook posts) without reliable grammar structures. Traditional relation extraction approaches, whether supervised, semi-supervised or unsupervised, commonly assume the availability of grammatically correct language corpora.
Model-driven Development of User Interfaces for IoT via Domain-specific Comp... - Marco Brambilla
Internet of Things technologies and applications are evolving and continuously gaining traction in all fields and environments, including homes, cities, services, industry and commercial enterprises. However, many problems still need to be addressed. For instance, the IoT vision is mainly focused on the technological and infrastructure aspects, and on the management and analysis of the huge amount of generated data, while so far the development of front-ends and user interfaces for IoT has not played a relevant role in research. On the contrary, user interfaces in the IoT ecosystem can play a key role in the acceptance of solutions by final adopters. In this paper we present a model-driven approach to the design of IoT interfaces, by defining a specific visual design language and design patterns for IoT applications, and we show them at work. The language we propose is defined as an extension of the OMG standard language IFML.
A Model-Based Method for Seamless Web and Mobile Experience. Splash 2016 conf. - Marco Brambilla
Consumer-centered software applications nowadays are required to be available both as mobile and desktop versions. However, the app design is frequently made only for one of the two (i.e., mobile first or web first), while missing an appropriate design for the other (which, in turn, simply mimics the interaction of the first one). This results in poor quality of the interaction on one or the other platform. Current solutions would require different designs, to be realized through different design methods and tools, which may double development and maintenance costs.

In order to mitigate this issue, this paper proposes a novel approach that supports the design of both web and mobile applications at once. Starting from a unique requirement and business specification, where web- and mobile-specific aspects are captured through tagging, we derive a platform-independent design of the system specified in IFML. This model is subsequently refined and detailed for the two platforms, and used to automatically generate both the web and mobile versions. If more precise interactions are needed for the mobile part, a blending with MobML, a mobile-specific modeling language, is devised. Full traceability of the relations between artifacts is granted.
The Web Science course focuses on the study of large-scale socio-technical systems associated with the World Wide Web. It considers the relationship between people and technology, the ways that society and technology complement one another and the way they impact on broader society. These analyses are inherently associated with Big Data management issues.
The course is organised in four parts.
1. Syntax
In the first part, the course introduces the basics of content analysis. It focuses on the syntactic aspects, covering the fundamentals of natural language processing and text mining. It describes the structure and typical characteristics of the different web sources, spanning search results, social media contents, social network structures, Web APIs, and so on. It also provides an overview of the basic Web analysis techniques applied in Web search and Web recommendation.
2. Semantics
In the second part, the course presents semantic technologies. These technologies are very important nowadays because they address the "variety" dimension of Big Data, i.e., they enable integration of multiple and diverse sources of information, which is typical of the modern Web. Covered topics include:
- RDF - a flexible data model to represent heterogeneous data
- OWL - a flexible ontological language to model heterogeneous data sources
- SPARQL - a query language for RDF.
It shows how to put all the pieces together in order to achieve interoperability among heterogeneous information sources.
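To make the RDF/SPARQL idea concrete without any library, here is a toy triple store with a single-pattern query. This is only a didactic sketch with made-up data; real systems would use an RDF framework (e.g. rdflib) and full SPARQL.

```python
# Toy RDF-style triple store: triples are (subject, predicate, object).
triples = {
    ("polimi", "locatedIn", "Milano"),
    ("polimi", "type", "University"),
    ("Milano", "locatedIn", "Italy"),
}

def match(pattern):
    """Match one triple pattern; None acts like a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogous to: SELECT ?s ?o WHERE { ?s :locatedIn ?o }
results = match((None, "locatedIn", None))
assert ("polimi", "locatedIn", "Milano") in results
assert len(results) == 2
```

OWL then adds the ontological layer on top of such triples (classes, properties, and inference rules), and SPARQL generalizes the single pattern above to graph patterns with joins.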
3. Time
The third part covers the realm of time-dependent data. The topics covered here address the "velocity" dimension of Big Data. The course shows the importance, in many Big Data analysis scenarios, of processing data streams coming for instance from Internet of Things (IoT) and Social Media sources, and describes how to apply semantic and syntactic techniques to time-dependent information. For instance, it shows how to extend RDF to model RDF streams, how to extend SPARQL to continuously process RDF streams, and how to reason over those streams.
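The core mechanism behind "continuous queries over a stream" is the sliding window, which can be sketched in plain Python. Real RDF stream engines (e.g. C-SPARQL) register continuous queries over windows of timestamped triples; this toy version just counts events observed in the last N seconds.

```python
from collections import deque

class SlidingWindowCount:
    """Count events observed in the last `width` seconds of a stream."""
    def __init__(self, width):
        self.width = width
        self.window = deque()  # timestamps, oldest first

    def push(self, timestamp):
        self.window.append(timestamp)
        # Evict items that have fallen out of the window.
        while self.window and self.window[0] <= timestamp - self.width:
            self.window.popleft()
        return len(self.window)

w = SlidingWindowCount(width=10)
assert w.push(0) == 1
assert w.push(5) == 2
assert w.push(12) == 2   # the event at t=0 has expired
```

A stream-reasoning system applies the same windowing idea, but the items are RDF triples and the per-window computation is a SPARQL-style query or inference step rather than a count.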
4. Applications
In the fourth part, the course focuses on specific application scenarios and presents the typical settings and problems where the presented techniques can be applied. This part discusses settings such as: big data analysis for smart cities; data analytics for brand monitoring (marketing) and event monitoring; data analysis for trend detection and user engagement; and so on.
Business process modeling and automatic management
1. Business Process Management
Presentation at SMAU 2009
BPM Automation
Methods and tools for the optimized management of business processes
Marco Brambilla
Politecnico di Milano, ICT Institute
marco.brambilla@polimi.it
http://home.dei.polimi.it/mbrambil/
2. Agenda
Business Process Management: motivations and concepts
Historical notes and current trends
The OMG BPMN standard: versions 1.2 and 2.0
Supporting tools
The market, a functional classification
A practical case study
The WebRatio BPM tool
A project in the finance / leasing sector
SMAU 2009 Marco Brambilla
3. History and trends
[Timeline 1980-2005: the workflow concept and FileNet WorkFlo around 1985; Enterprise Application Integration (EAI); Web Services; SOA and BPMN from 2000 on; then rules, modeling, monitoring and optimization converging into BPM]
Origins: integration of very diverse experiences
Drivers: business and technological aspects
4. BPM today: phases, goals and challenges
[Lifecycle diagram: BP Design -> BP Modeling -> BP Execution -> BP Monitoring (BAM) -> BP Optimization, and back to design]
Goals:
Integration of applications
Web services
SOA (Service Oriented Architectures)
Orchestration languages (e.g. BPEL)
Continuous evolution of processes
A virtuous development process
6. BPMN concepts
Activity: a unit of work
Subprocess: an activity that can in turn be decomposed into a subprocess
Pool: represents a participant
Lane: a partition of a pool, for various purposes
7. BPMN symbols – Events and flows
EVENTS (flow dimension and type dimension):
Start (start of a process)
End (conclusion of a process)
Intermediate (event during the course of the process)
FLOWS:
Control/sequence flow: execution order
Default flow: when there are several choices, the flow taken by default
Message flow
Conditional control/sequence flow: followed if the condition specified at the start of the flow holds
Association: association or flow of data objects
8. BPMN symbols – Gateways and loops
Activity Loop: implements while and until loops on a single activity
Multiple Instance Loop: for-each loops on a single activity
Cycle: an explicit loop built with gateways
10. Towards BPMN 2.0 – what's new
Relations between models: several diagrams for the same process, different but consistent perspectives
Non-interrupting events: to trigger actions upon an event, without interrupting the current flow
Escalation events: to signal an event raised by a user
Business rule task: to invoke business rules
Conversation diagrams and choreography diagrams: new diagram types
Alignment with BPDM (Business Process Definition Metamodel), toward a single consistent language
Standard XML schema: for the interchange of BPMN models
11. The tools
Over 50 BPM products support BPMN
Gartner magic quadrant
Different targets:
Analysts (Billfish BPM, BizAgi)
Developers (TIBCO)
Analyst-developers (Oracle, IBM)
12. The tools - highlight: analysts (Billfish BPM, BizAgi)
13. The tools - highlight: developers (TIBCO)
14. The tools - highlight: analyst-developers (Oracle, IBM)
15. The tools
Theoretical and practical interoperability
Different characteristics:
Ease of modeling (BizAgi, Oracle)
BPMN coverage (TIBCO, Intalio)
Simulation (IBM WebSphere Business Modeler)
BAM - business analysis (BizAgi, Oracle)
Integration of data sources (DB, web applications, information systems) (IBM, TIBCO, WebRatio)
Prototyping (WebRatio BPM, Billfish BPM, Oracle)
Generation and customizability of the interface (forms, visual identity, ...) (BizAgi, WebRatio)
16. The tools - highlight: ease of modeling (BizAgi, Oracle)
17. The tools - highlight: BPMN coverage (TIBCO, Intalio)
18. The tools - highlight: simulation (IBM WebSphere Business Modeler)
19. The tools - highlight: BAM - business analysis (BizAgi, Oracle)
20. The tools - highlight: integration of data sources (IBM, TIBCO, WebRatio), prototyping (WebRatio BPM, Billfish BPM, Oracle), interface generation and customizability (BizAgi, WebRatio)