This document discusses the reorganization of a development team at Cisco WebEx. It proposes separating front-end developers (F2E) and server-side developers (SDE) into different teams for improved productivity, quality, and skill development. It also recommends establishing explicit requirements, change control processes, and an agreed-upon data interface specification to facilitate separation and cooperation between the F2E and SDE teams.
Database basics for new-ish developers -- All Things Open, October 18th, 2021, by Dave Stokes
Do you wonder why it takes your database so long to find the top five of your fifty-six million customers? Do you really have a good idea of what NULL is and how to use it? And why are some database queries so quick and others frustratingly slow? Relational databases have been around for over fifty years and have been frustrating developers for at least forty-nine of those years. This session is an attempt to explain why the database sometimes seems very fast and other times not. You will learn how to organize data into tables by function (normalization) to avoid redundancies, how to join two tables to combine data, and why Structured Query Language is so very different from most other languages. And you will see how thinking in sets rather than records can greatly improve your life with a database.
In its 3.2 and 3.3 generations, the Spring Framework focuses on core features for asynchronous processing and message-oriented architectures, as well as enhancements to its caching support and its language support. The Spring Framework project also comes with a new Gradle-based build and a new GitHub-based contribution model. In this session, we'll discuss key features in this year's Spring 3.2 and next year's Spring 4.0, including support for upcoming standards such as JCache, websockets, JMS 2.0, and not least of all Java 8's language features.
Utilization of Zend: an ultimate alternative for intense data processing, by Career at Elsner
Normally, you can write raw PHP/MySQL functionality for your requirements, but if you wish to keep your code clean and reusable, using Zend functionality is the way to go. Magento development companies use the Zend framework.
This is a talk I presented at the University of Limerick to give people an introduction to CouchDB. What is it? How does it generally work? It introduces new concepts, etc.
A Step by Step Introduction to the MySQL Document Store, by Dave Stokes
Looking for a fast, flexible NoSQL document store, one that runs with the power and reliability of MySQL? This is an intro to using the MySQL Document Store.
2010 Software Licensing and Pricing Survey Results and 2011 Predictions, by Flexera
2010 Software Licensing and Pricing Survey Results and 2011 Predictions by Amy Konary, Director, Software Pricing and Licensing, IDC
Presented at SoftSummit 2010
Data Seeding via Parameterized API Requests, by RapidValue
A quick guide on how to seed data via parameterized API requests. Parameterization is very important for automation testing. It helps you iterate on input data with multiple data sets, which makes your scripts reusable and maintainable. In a few scenarios you can still manage with hard-coded requests, but the same approach will not work where a sheer number of combinations must be validated. By implementing the right solution, you can keep your code base and test data at an ideal size and still enjoy the benefits of optimal coverage.
A quick introduction to node.js in order to have good basics to build a simple website.
This slide covers:
- node.js (you don't say?)
- express
- jade
- mongoDB
- mongoose
Full Stack Development With Node.js And NoSQL (Nic Raboy & Arun Gupta), by Red Hat Developers
In this session, we'll talk about what's different about this generation of web applications and how a solid development approach must consider the latency, throughput, and interactivity demanded by users across mobile devices, web browsers, and the Internet of Things (IoT). We'll demonstrate how to include Couchbase in such applications to support a flexible data model and the easy scalability required for modern development. We'll demonstrate how to create a full stack application focusing on the CEAN stack, which is composed of Couchbase, the Express framework, AngularJS, and Node.js.
Agile Data Science 2.0 (O'Reilly 2017) defines a methodology and a software stack with which to apply the methods. *The methodology* seeks to deliver data products in short sprints by going meta and putting the focus on the applied research process itself. *The stack* is but an example of one meeting the requirements that it be utterly scalable and utterly efficient in use by application developers as well as data engineers. It includes everything needed to build a full-blown predictive system: Apache Spark, Apache Kafka, Apache Airflow (incubating), MongoDB, Elasticsearch, Apache Parquet, Python/Flask, and jQuery. This talk will cover the full lifecycle of large data application development and will show how to use lessons from agile software engineering to apply data science with this full stack to build better analytics applications. The entire lifecycle of big data application development is discussed. The system starts with plumbing, moves on to data tables, charts, and search, continues through interactive reports, and builds towards predictions in both batch and realtime (defining the role for both), the deployment of predictive systems, and how to iteratively improve predictions that prove valuable.
Learning To Run - XPages for Lotus Notes Client Developers, by Kathy Brown
You’re an experienced Lotus Notes developer. You’ve been doing “classic” development for years. You know LotusScript better than your native language. You know @Formula like the back of your hand. But when it comes to XPages and JavaScript, you feel like you’re learning to walk all over again. This session will cover some tips and tricks to get you up and running in XPages. Learn how to translate what you already know into what you need to know for XPages. Find out where to get the information to be just as skillful at XPages as you are with Notes client development.
The outline of the presentation (presented at NDC 2011, Oslo, Norway):
- Short summary of OData evolution and current state
- Quick presentation of tools used to build and test OData services and clients (Visual Studio, LinqPad, Fiddler)
- Definition of canonical REST service, conformance of DataService-based implementation
- Updateable OData services
- Sharing single conceptual data model between databases from different vendors
- OData services without Entity Framework (NHibernate, custom data provider)
- Practical tips (logging, WCF binding, deployment)
Solutions for bi-directional Integration between Oracle RDBMS & Apache Kafka, by Guido Schmutz
A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices, and Stream Processing. Today’s enterprises often have their core systems implemented on top of relational databases, such as the Oracle RDBMS. Implementing a new solution supporting the digital strategy using Kafka and its ecosystem cannot always be done completely separately from the traditional legacy solutions. Often streaming data has to be enriched with state data which is held in the RDBMS of a legacy application. It’s important to cache this data in the stream processing solution, so that it can be efficiently joined to the data stream. But how do we make sure that the cache is kept up-to-date if the source data changes? We can either poll for changes from Kafka using Kafka Connect or let the RDBMS push the data changes to Kafka. But what about writing data back to the legacy application, e.g. when an anomaly detected inside the stream processing solution should trigger an action inside the legacy application? Using Kafka Connect we can write to a database table or view, which could trigger the action. But this is not always the best option. If you have an Oracle RDBMS, there are many other ways to integrate the database with Kafka, such as Advanced Queuing (a message broker in the database), CDC through GoldenGate or Debezium, Oracle REST Data Services (ORDS), and more. In this session, we present various blueprints for integrating an Oracle RDBMS with Apache Kafka in both directions and discuss how these blueprints can be implemented using the products mentioned before.
Solutions for bi-directional integration between Oracle RDBMS & Apache Kafka, by Guido Schmutz
Apache Kafka is a popular distributed streaming data platform. A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices, and Stream Processing. Data sources flowing into Kafka are often native data streams such as social media streams, telemetry data, financial transactions, and many others. But these data streams contain only part of the information. A lot of data necessary in stream processing is stored in traditional systems backed by relational databases. To implement new, modern, real-time solutions, an up-to-date view of that information is needed. So how do we make sure that information can flow between the RDBMS and Kafka, so that changes are available in Kafka as soon as possible, in near-real-time? This session will present different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate, and bridging Kafka with Oracle Advanced Queuing (AQ).
RSVP Node.js class at www.nycdatascience.com
NYC data science academy's free workshop, given at NYC Open Data Meetup, http://www.meetup.com/NYC-Open-Data/events/163300552/
CouchApps are web applications built using CouchDB, JavaScript, and HTML5. CouchDB is a document-oriented database that stores JSON documents, has a RESTful HTTP API, and is queried using map/reduce views. This talk will answer your basic questions about CouchDB, but will focus on building CouchApps and related tools.
2. Brief historical retrospect of web development
◦ Problems and challenges
Re-organization
◦ F2E & SDE
◦ Process
◦ Benefits
Separation and Cooperation
◦ How did Y! do it
◦ How did Cisco WebEx do it
◦ A replicable model for any language
3. 1 man, from end 2 end.
Small business
Most of the sites look the same
No innovation
4. How to handle the big business dev?
How to improve the productivity?
How to improve the quality?
6. We need a re-organization of our developer team.
9. For the Company
Improve productivity
Improve product quality
Save costs
Be more professional
For the Developer
Improve your skills to become a master
Do what you want to do
Helps with career planning
11. PHP
Maple System + PHP
What is the problem?
Hard to replace the mock data
Close tags were always getting lost
QA joined too late
12. Java + Freemarker + Data Interface Spec
What is the problem?
Environment data was not handled
The two frameworks conflict
15. Biz Data and Ajax Call Response
◦ It should be agreed upon by F2E and SDE
{
status: "SUCCESS|FAILURE",
message: "Response report of the current request",
result: "Return value; it can be any data type,
such as String, Array, or Object.
F2E and SDE need to agree on
the data structure here"
}
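On the F2E side, the agreed envelope can be handled by one small helper. This is a minimal sketch, assuming only the status/message/result fields from the spec above; the function name `handleBizResponse` is hypothetical, not from the deck.

```javascript
// Minimal handler for the agreed response envelope.
// On SUCCESS it returns the (per-call agreed) result payload;
// on FAILURE it surfaces the server's message to the caller.
function handleBizResponse(json) {
  if (json.status === "SUCCESS") {
    return json.result; // structure of result is agreed per call by F2E and SDE
  }
  throw new Error(json.message);
}
```

Keeping this logic in one place means every Ajax callback only deals with the agreed result structure, never with the envelope itself.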
16. Environment Data
◦ It should be agreed upon by F2E and SDE
{
skinpath: "/resource/image/",
jspath: "/resource/css/",
rootpath: "/resource/js/",
currentuser.cred: "U1U7EXG5",
currentuser.username: "Charlie Du"
}
17. Environment Data
◦ Defines where each piece of environment data comes from, on the SDE side
{
currentuser.cred: {
from: "session"
},
currentuser.username: {
from: "session"
}
}
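A hypothetical sketch of how the SDE side could resolve such a source map: for every key whose declared source is "session", the value is looked up in the user session. The function name `resolveEnvData` and the plain-object session are assumptions, not from the deck.

```javascript
// Resolve environment data from a source map like the one on the slide.
// Only the "session" source is handled here, since that is all the slide shows.
function resolveEnvData(sourceMap, session) {
  var env = {};
  for (var key in sourceMap) {
    if (sourceMap[key].from === "session") {
      env[key] = session[key]; // copy the session value under the same key
    }
  }
  return env;
}
```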
19. Form Data
Action URL
Items’ Name
Submit Method
Link URL
20. All of these finally need to be documented in a
Data Interface Specification!
Data Interface Specification
XXX Project
Version: 0.1
F2E Owner: Charlie Du
SDE Owner: Bo Song
2010/10/09
21. I18N For JS
<@easySC.i18nJs path="…/feed.js"/>
It should generate this code:
<script type="text/javascript" src="…/feed_en_US.js"></script>
<script type="text/javascript" src="…/feed.js"></script>
"en_US" should match the client language.
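The directive's output could be produced along these lines. A minimal sketch only, assuming the client locale is already known; the function name `i18nJsTags` is illustrative, not from the deck.

```javascript
// Turn ".../feed.js" into the locale-specific tag plus the base tag,
// mirroring the i18nJs directive's output shown on the slide.
function i18nJsTags(path, locale) {
  // "feed.js" + "en_US" -> "feed_en_US.js"
  var localized = path.replace(/\.js$/, "_" + locale + ".js");
  return '<script type="text/javascript" src="' + localized + '"></script>\n' +
         '<script type="text/javascript" src="' + path + '"></script>';
}
```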
22. I18N For Template
<@easySC.i18nMsg key="feed.userinfo" arguments="Charlie" />
It should get the key "feed.userinfo" from the i18n properties and
pass the arguments to render the final content.
For example: feed.userinfo={0}'s Info
The result should be: Charlie's Info
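The {0}-style substitution behind this can be sketched in a few lines; `formatI18nMsg` is a hypothetical name for what the directive does after looking up the key in the properties file.

```javascript
// Replace numeric placeholders like {0}, {1} with the matching argument,
// as in the slide's example: "{0}'s Info" + ["Charlie"] -> "Charlie's Info".
function formatI18nMsg(pattern, args) {
  return pattern.replace(/\{(\d+)\}/g, function (match, index) {
    return args[Number(index)];
  });
}
```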
23. Biz Data Access
<@easySC.bizData name="feed" service="feed.feed_list"
param="{pageSize:10,pageIndex:0}" />
"service" should match mockdata/biz/feed/feed_list.json on the Mock Env;
on the Production Env it is used as a Service Name.
"param" will be used by the Production Env.
"name" will hold the returned value, a JSON Object from the .json mock data or the true
data.
Then we can use the variable "feed" to access the data, such as feed.status,
feed.message, feed.result.
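The service-name resolution described above might look like this. A sketch under the assumption that mock files live under mockdata/biz/ exactly as shown on the slide; `resolveBizService` is a hypothetical name.

```javascript
// Map a dotted service name to its data source:
// on the Mock Env, "feed.feed_list" -> "mockdata/biz/feed/feed_list.json";
// on the Production Env, the name is used as the real service name.
function resolveBizService(service, isMockEnv) {
  if (isMockEnv) {
    return "mockdata/biz/" + service.split(".").join("/") + ".json";
  }
  return service;
}
```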
24. Biz Data Access For AJAX Calls
bizcall.ext [.do, .php, .asp(x)]
All Ajax calls point to the JSONRPCHandler.ext and post one field:
Name: bizcall
Value: {name:"feed", service:"feed.feed_list", params:{pageSize:10,pageIndex:0}}
Then, on the SDE side, they can still use easySC.bizData's handler; on the F2E
side, they can use the same mock data.
Tip: you can build the request as a utility function, such as "bizCall".
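Such a bizCall utility could start from a request-body builder like the one below. This is a sketch only: the transport (XMLHttpRequest/fetch) is left out, and the function name and form-encoded body shape are assumptions.

```javascript
// Build the single "bizcall" form field the unified handler expects,
// carrying the name/service/params triple as JSON.
function buildBizCallBody(name, service, params) {
  return "bizcall=" + encodeURIComponent(JSON.stringify({
    name: name, service: service, params: params
  }));
}
```

A full `bizCall(name, service, params, callback)` would post this body to JSONRPCHandler.ext and feed the envelope it gets back into the shared response handler.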
25. ENV Data Access
<@easySC.envData name="username"
key="currentuser.username" />
"key" should match the property "currentuser.username" in
mockdata/env/env.txt (Mock Env)
"name" will hold the returned value: "Charlie Du"
Then we can use the variable "username".
26. The directive can be implemented in any way,
in different languages!
The core is:
Access a txt file and parse the content into a JSON Object on the Mock
Env
Assemble true data into a JSON Object on the Production Env
They should provide at least 2 types of return value: JSON Object
and JSON Text (for Templates and Ajax Calls)
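The Mock Env half of that core can be sketched as a small parser. The one-`key=value`-pair-per-line file format is an assumption about env.txt, and `parseEnvText` is a hypothetical name; the JSON Text form then falls out of the JSON Object for free.

```javascript
// Parse "key=value" lines from a mock env.txt into a JSON Object.
// Blank lines and lines without "=" are skipped.
function parseEnvText(text) {
  var obj = {};
  text.split("\n").forEach(function (line) {
    line = line.trim();
    var idx = line.indexOf("=");
    if (!line || idx < 0) return;
    obj[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  });
  return obj;
}

// The second return type: the same data as JSON Text, for Ajax responses.
function toJsonText(obj) {
  return JSON.stringify(obj);
}
```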