This lecture covers the TDS (Tabular Data Stream) protocol from several angles. We'll briefly review its history and then go over the specification with a walk-through approach so the learner can understand how to navigate the documentation. Critical aspects of the protocol such as login, execution, and result-set processing will be covered. The majority of the lecture covers how to collect TDS traces from the different ASE clients (Open Client, ODBC, ADO.NET, and jConnect) and how to format the traces into readable form. From there we'll look at problem scenarios and learn how to read the trace and use it for diagnostic purposes.
CICS is the powerhouse of the mainframe, with all the capabilities needed to handle online transactions. This presentation covers highly useful CICS concepts to refresh your CICS knowledge quickly.
The OCI capabilities have a long story to tell! Leverage OCI by moving your apps to the cloud and accelerate your digital transformation toward a cost-effective, innovative, and high-performing infrastructure. Dive into the details now.
Vertica is a column-oriented database management system. It stores data in columnar projections rather than rows. The document provides an overview of Vertica concepts such as column storage, hybrid storage, projections vs tables, and types of projections. It also describes Vertica objects like projections, views, tables, SQL functions, and sequences. Operations covered include DML statements, bulk data loading using COPY, bulk updating using MERGE, and exporting data. The document compares Vertica to Teradata and provides version information.
Android Services Black Magic by Aleksandar Gargenta, Marakana Inc.
Presented at Android Builders Summit on February 14th in Redwood Shores, CA by Aleksandar (Saša) Gargenta, from Marakana Inc.
For the complete slides from this talk go to http://mrkn.co/munz7
"The most interesting part of Android stack are the Android System Services. The 60+ such services expose the low level functionality, such as Power Management, Wifi, Camera, Sensors, GPS, Display, Audio, Media, and so on, all the hardware all the way up to the application layer. While each one is different, the all have certain similarities, namely the way they rely on Binder (Android's IPC mechanism), use JNI to cross Java-C boundary, and use of shared libraries to abstract the Linux drivers. In this talk, we'll explore the common system services in Android and discuss their architecture. You will get to see the diagrams of the inner workings of some of the previously undocumented parts of the Android stack. By the end of the talk, you should have a better understanding of the underpinnings of the backbone of Android OS."
https://events.linuxfoundation.org/events/android-builders-summit/gargentaa
Building Your Data Streams for all the IoT (DevOps.com)
Apache NiFi and OPC-UA have been game changers in the world of IoT, allowing you to automate the flow of data from IoT sensors to just about anywhere you want. In this presentation, Craig Hobbs of InfluxData will demonstrate how you can collect and process data streams of millions of values per second using an OPC-UA server and automate them into an InfluxDB stream processing engine using NiFi. You'll learn how to set up your workflow to gain a highly available, fast, scalable, and reliable data stream in a real-world application.
This document provides an overview of a SQL-on-Hadoop tutorial. It introduces the presenters and discusses why SQL is important for Hadoop, as MapReduce is not optimal for all use cases. It also notes that while the database community knows how to efficiently process data, SQL-on-Hadoop systems face challenges due to the limitations of running on top of HDFS and Hadoop ecosystems. The tutorial outline covers SQL-on-Hadoop technologies like storage formats, runtime engines, and query optimization.
ASE Performance and Tuning Parameters Beyond the cfg File (SAP Technology)
The ASE configuration file contains a long list of changeable parameters, but many parameters still exist outside of the main configuration file. This session is a discussion of hidden gems in other places that can improve performance for queries, networking, and system administration.
This document discusses the evolution of CPU management on IBM mainframes to account for new capabilities introduced in recent years. It describes technologies like zAAP, zIIP, IFL and Coupling Facility CPUs. It also addresses how traditional LPAR configuration and IRD management have become more complex as installations increase the number and diversity of LPARs. The document provides guidance on analyzing CPU utilization and evolving performance reporting to properly account for these new technologies and dynamic management capabilities.
Oracle Exadata is a database machine that combines hardware and software. It consists of database servers, storage servers, and InfiniBand switches housed in a self-contained rack. Exadata uses several technologies like smart scan, storage indexing, predicate filtering, and smart flash cache to improve database performance. It also features a flash cache, hybrid columnar compression, and an I/O resource manager. Exadata timelines show the improvements in each version from 2008 to the present. Exadata provides many benefits but also has some limitations and challenges around its closed architecture, integration with the optimizer, and high price tag.
Why I quit Amazon and Build the Next-gen Streaming System (Yingjun Wu)
Yingjun Wu quit their job at Amazon to build a next-generation streaming system called RisingWave. RisingWave is designed as a cloud-native managed service with a SQL-oriented interface and an architecture that fully leverages cloud resources, aiming to reduce operation, development, and performance costs compared to traditional streaming systems.
The document discusses using the Oracle Database Configuration Assistant (DBCA) to create Oracle databases. It covers planning database design, choosing character sets, using DBCA to create a database including configuring files and memory, creating database design templates, and performing additional tasks with DBCA like deleting databases. The end summarizes how to create a database, generate scripts, manage templates, and perform additional DBCA tasks.
This document discusses storing product and order data as JSON in a database to support an agile development process. It describes creating tables with JSON columns to store this data, and using JSON functions like JSON_VALUE and JSON_TABLE to query and transform the JSON data. Examples are provided of indexing JSON columns for performance and updating product JSON to include unit costs by joining external data. The goal is to enable flexible and rapid evolution of the application through storing data in JSON.
Top 10 Mistakes When Migrating From Oracle to PostgreSQL (Jim Mlodgenski)
As more and more people move to PostgreSQL from Oracle, a pattern of mistakes is emerging. They can be caused by the tools being used, or simply by not understanding how PostgreSQL differs from Oracle. In this talk we will discuss the top mistakes people generally make when moving to PostgreSQL from Oracle and what the correct course of action is.
This document discusses Oracle Cloud Infrastructure compute services. It describes the differences between bare metal instances, virtual machines, and dedicated hosts. It provides an overview of Oracle-provided images, bringing your own images, and creating custom images. Instance configurations, pools, and autoscaling policies are also covered. The document discusses instance metadata and lifecycle states like starting, stopping, rebooting, and terminating instances. It provides examples of using the instance metadata service and describes how billing works for different instance shapes depending on their state.
IMS DC is a hierarchical database management system supplied by IBM that runs on mainframe computers. It has two main components: Data Base (DB) processing and Data Communication (DC) processing. DC handles information in the form of messages that flow between remote terminals and application programs. Major kinds of online programs that can be written for IMS DC include inquiry programs, data entry programs, maintenance programs, and menu programs.
This is the presentation I made at the Hadoop User Group Ireland meetup in Dublin. It covers the main ideas of MPP, Hadoop, and distributed systems in general, and also how to choose the best option for you.
Jingwei Lu and Jason Zhang (Airbnb)
AirStream is a realtime stream computation framework built on top of Spark Streaming and HBase that allows our engineers and data scientists to easily leverage HBase to get real-time insights and build real-time feedback loops. In this talk, we will introduce AirStream, and then go over a few production use cases.
This document discusses SCAN, VIP, and HAIP in Oracle RAC environments. It provides details on:
- VIP - a virtual IP address that is not statically linked to a single node, allowing for faster failovers. Each node has a VIP.
- SCAN - a single virtual IP and listener that provides load balancing and high availability. SCAN acts as an abstraction layer so client connect strings do not need to change.
- HAIP - high availability IP addresses that allow clusterware and the database to use plumbed IP addresses for private interconnect traffic via solutions like bonding and trunking.
Apache Hive is a rapidly evolving project which continues to enjoy great adoption in the big data ecosystem. As Hive continues to grow its support for analytics, reporting, and interactive query, the community is hard at work in improving it along with many different dimensions and use cases. This talk will provide an overview of the latest and greatest features and optimizations which have landed in the project over the last year. Materialized views, the extension of ACID semantics to non-ORC data, and workload management are some noteworthy new features.
We will discuss optimizations which provide major performance gains as well as integration with other big data technologies such as Apache Spark, Druid, and Kafka. The talk will also provide a glimpse of what is expected to come in the near future.
Deep Dive on Amazon Aurora PostgreSQL Performance Tuning (DAT428-R1) - AWS re... (Amazon Web Services)
Amazon Aurora offers several options for monitoring and optimizing PostgreSQL database performance. These include Enhanced Monitoring and Performance Insights, an easy-to-use tool for assessing the load on your database and identifying slow-performing queries. In this session, learn how to tune the performance of your Aurora database with PostgreSQL compatibility, whether your application is in development or in production. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
Advancing IoT Communication Security with TLS and DTLS v1.3 (Hannes Tschofenig)
Missing communication security is a common vulnerability in Internet of Things deployments. Addressing this vulnerability is, in theory, relatively easy: with TLS and DTLS, two widely used security protocols are available. They are used to secure web and smart phone apps.
In this talk Hannes Tschofenig explains how the TLS/DTLS 1.3 protocols work and how they differ from previous versions. Hannes also speaks about the performance improvements and how they help in IoT deployments.
Maximizing Database Tuning in SAP SQL Anywhere (SAP Technology)
This session illustrates the different tools available in SQL Anywhere to analyze performance issues, as well as describes the most common types of performance problems encountered by database developers and administrators. We also take a look at various tips and techniques that will help boost the performance of your SQL Anywhere database.
An In-Depth Look at SAP SQL Anywhere Performance Features (SAP Technology)
This presentation examines the internal mechanism behind SQL Anywhere’s self-management and self-tuning functionality. Topics covered will illustrate how the various performance features, such as server cache management, self-healing statistics and dynamic multiprogramming level, work in concert to provide a robust data management solution in zero-administration environments.
A Practical Deep Dive into Observability of Streaming Applications with Kosta... (HostedbyConfluent)
This document provides an overview of observability of streaming applications using Kafka. It discusses the three pillars of observability - logging, metrics, and tracing. It describes how to expose Kafka client-side metrics using interceptors, metric reporters, and the Spring Boot framework. It demonstrates calculating consumer lag from broker and client-side metrics. It introduces OpenTelemetry for collecting telemetry data across applications and exporting to various backends. Finally, it wraps up with lessons on monitoring consumer lag trends and selecting the right metrics to ship.
OData is becoming the "lingua franca" for data exchange across the internet. Serving up OData web services from most relational database backends requires a web server and third-party custom components that translate OData calls into SQL statements (and vice versa). SQL Anywhere 16.0 introduced a new OData Producer that does all this work for you automatically. This session will walk through the setup and configuration of this new server process and show real-world examples of how to use it.
This lecture will include examples of how to use some of SQL Anywhere's lesser known but incredibly powerful functionality to deliver applications that can do more for your customers. Features discussed will include the SQL Anywhere http server, using external environments to call Java/C++/.Net libraries, materialized views, and more.
Discover How Volvo Cars Uses a Time Series Database to Become Data-Driven (DevOps.com)
Volvo Cars uses InfluxDB to improve operations by empowering its developers. Self-service monitoring tools have enabled the company’s teams to become more agile and collaborative while ultimately improving product development lifecycle and incident management. Join this webinar to learn how Volvo Cars uses a time series database to become data-driven and accelerate innovation.
The document provides an overview of a Cisco webinar on the CCNP Data Center certification. It discusses:
1) The agenda which includes an overview of the data center environment, details on the CCNP Data Center certification, and skills and technologies required.
2) An overview of the CCNP Data Center certification exams which focus on the latest skills, technologies, and best practices. The certification can be achieved through 4 exams.
3) Technology spotlights on topics covered in the certification like unified computing, virtualization, and automation.
Is there a way that we can build our Azure Data Factory all with parameters b... (Erwin de Kreuk)
Is there a way that we can build our Data Factory entirely with parameters, all based on metadata? Yes there is, and I will show you how. During this session I will show how you can load incremental or full datasets from your SQL database into your Azure Data Lake. The next step is to track the history of these extracted tables; we will do this with Azure Databricks using Delta Lake. The last step is to make this data available in Azure SQL Database or Azure Synapse Analytics. Oh, and we want some logging from our processes as well. A lot to talk about and to demo during this session.
This document provides an overview and summary of Siemens S7-300 PLC programming. It covers the STEP 7 programming software, comparing CPU models and modules, addressing modules, loading memory, data types, and instructions for statement list programming, logic, math, timers, and more. Programming examples are also included at the end.
CCI2017 - Considerations for Migrating Databases to Azure - Gianluca Sartori (walk2talk srl)
The document discusses considerations for migrating databases to Microsoft Azure SQL Database. It covers cloud options like Infrastructure as a Service (IaaS) using SQL Server on Azure VMs and Platform as a Service (PaaS) options like Azure SQL Database. It also discusses analyzing database compatibility, different migration methods like using BACPAC files or the Data Migration Assistant, and ways to optimize the migration process like monitoring tempdb usage.
We all know that "knowledge is power", but how realistic is aiming for transparency in our own IT environments? The interaction between clients, servers, applications and users is often difficult to analyze, much less quantify. Come join Daniel Reimann to take a look at the history of your infrastructure and prepare you for future projects such as consolidations or infrastructure additions (e.g. IBM Connections). We will show you how and why you should be looking at your infrastructure as a whole, rather than individual technology silos. Find out where the hidden challenges of your IBM Notes/Domino environment are, what impact they have on your network and how you can fix it! A bolt of lightning for your DeLore...erm...infrastructure!
SAP FIORI COEP Pune - Pavan Golesar (ppt)
Hi,
This material is not for commercial purposes. Disclaimer: copyrighted content is included.
For learning purposes only.
sapparamount@gmail.com
Pavan Golesar
Wwt Corp. Overview & Data Center Presentation For Hendee (Christopher Hendee)
The document provides an overview of World Wide Technology (WWT), a privately held systems integrator with nearly $3 billion in revenue. WWT has over 1,300 employees globally and provides technology and supply chain services to customers in government, commercial, and service provider industries. It has strong partnerships with leading technology manufacturers and focuses on data center solutions, IT products/services, supply chain management, and professional services.
This document summarizes Tatiana Al-Chueyr's presentation on precomputing recommendations for BBC Sounds using Apache Beam. The initial pipeline had high costs due to processing large amounts of data in a single pipeline. Through several iterations, the pipeline was simplified and split into two pipelines - one to precompute recommendations and another to apply business rules. This reduced costs by 82% by using smaller machine types, batching, shared memory, and FlexRS in Apache Dataflow. Splitting the pipeline into minimal interfaces for each task led to more predictable behavior and lower costs.
ApacheCon @Home 2020
StreamPipes is an open source self-service IoT toolbox to enable non-technical users to connect, analyze and explore IoT data streams.
https://streampipes.apache.org/
This lecture will discuss and demonstrate how SQL Anywhere can be used in Internet of Things scenarios. Learn how to leverage the full power of SQL Anywhere in applications that run on single board computers like the Raspberry Pi. Topics discussed will include an overview of IoT, deployment and setup of SQL Anywhere on single board computers, and an example application to demonstrate the powerful and flexible applications that can be run anywhere and synchronize data back to an enterprise data source such as Hana Cloud Platform.
The document provides links to Microsoft technical articles on topics related to planning and deploying Lync Server including capacity planning, server hardware requirements, virtualization, front-end server components, large pools, Office Web Apps capacity, persistent chat, upgrade notes, response groups, pool failover, and network bandwidth requirements. It includes tables on the minimum number of front-end servers required for pools of different sizes and maximum and minimum video bitrates for different video codecs and resolutions.
Building and Managing your Virtual Datacenter using PowerShell DSC - Florin L... (ITCamp)
PowerShell DSC is a configuration management platform that gives the operations team the capability to deploy and manage systems by defining the desired configuration of a machine, while having the assurance that, whatever happens, the machine's configuration will remain the same.
In this session you will learn what PowerShell DSC is and how it can grant you the power of implementing a DevOps-oriented environment by building and managing your infrastructure in an automatic and consistent fashion.
This document provides an overview of the Open Data Protocol (OData). It introduces OData, including its background and purpose in addressing problems with data APIs. The key aspects of OData are summarized, including its use of URIs and conventions for queries. Methods for producing and consuming OData feeds using ASP.NET Web API and LINQ are demonstrated. Best practices for OData producers and consumers are outlined. The document encourages the use of OData to empower users and unlock new revenue from organizational data.
Adaptive computing and pay-as-you-model for SAP.....
Customers should stop buying on CAPEX and signing long term contracts.... it is time to move to Consumption Based Computing...
Similar to Tabular Data Stream: The Binding Between Client and SAP ASE (20)
The document discusses SAP Integration Suite and its role in connecting enterprises and enabling innovation. Some key points:
- SAP Integration Suite is the foundation for agility and innovation by connecting systems, processes, and experiences across enterprises and ecosystems.
- It provides pre-built integration content and tools to accelerate integration projects, empower business users with self-service integration, and customize omnichannel experiences.
- Case studies showcase how companies have used SAP Integration Suite to boost digital sales, streamline processes, enhance the customer experience, and gain real-time insights.
Future-Proof Your Business Processes by Automating SAP S/4HANA processes with... (SAP Technology)
Automating repetitive manual business processes enables your employees to focus on key business priorities, reduce manual errors, and improve customer satisfaction. Learn how SAP Intelligent Robotic Process Automation helps you automate SAP S/4HANA business processes with pre-built bots and comprehensive development toolset.
7 Top Reasons to Automate Processes with SAP Intelligent Robotic Processes Au... (SAP Technology)
This document discusses how SAP Intelligent RPA can be used to automate SAP S/4HANA processes. It highlights top reasons to automate processes with SAP Intelligent RPA, including best-in-class integration with SAP applications, fully attended and unattended RPA capabilities, and comprehensive bot development tools. The document provides an example of how one company used SAP Intelligent RPA to automate tasks like auto-uploading documents and improving order closing, reducing time spent on those processes.
Extend SAP S/4HANA to deliver real-time intelligent processes (SAP Technology)
Organizations are facing challenges with complex data landscapes that limit their ability to gain insights and act with agility. SAP's Business Technology Platform helps address these challenges by enabling organizations to connect and share all their data, deliver real-time insights, and build intelligent apps and processes. The platform includes solutions for data management, analytics, application development and integration, as well as intelligent technologies. It provides the tools and capabilities to transform data into business value and realize an intelligent enterprise.
Process optimization and automation for SAP S/4HANA with SAP’s Business Techn... (SAP Technology)
This document discusses process optimization and automation for SAP S/4HANA. It provides statistics on the benefits of robotic process automation (RPA) and process mining. These include up to a 30% increase in productivity and a 50% reduction in data footprint. The document discusses tools for integration, process management, automation and monitoring from SAP that can be used to streamline, optimize and automate SAP S/4HANA processes. These include SAP Cloud Platform Integration Suite, SAP Intelligent Business Process Management, SAP Process Orchestration, SAP Intelligent RPA and SAP Process Mining by Celonis. Use cases and examples of automating SAP S/4HANA processes
Accelerate your journey to SAP S/4HANA with SAP’s Business Technology Platform (SAP Technology)
Best run companies are already benefiting from SAP S/4HANA. Move to SAP S/4HANA quickly with confidence. SAP EIM and SAP Cloud Platform capabilities help you migrate data, curate, and govern master data, archive unused data, move custom code. With performance, functional, and security test tools, customers can move to SAP S/4HANA with confidence.
Accelerate Your Move to an Intelligent Enterprise with SAP Cloud Platform and... (SAP Technology)
Manage rapidly evolving business processes with greater ease. Minimize complex, time-consuming updates. Reduce high maintenance costs. Discover how enterprise IT teams are conquering the challenges of migrating from legacy ERP systems and innovating faster with SAP Cloud Platform and SAP S/4HANA.
Transform your business with intelligent insights and SAP S/4HANA (SAP Technology)
Extend SAP S/4HANA with SAP’s Business Technology Platform to enable digital transformation through data-driven intelligence and process innovation. Learn how SAP’s Business Technology Platform delivers unprecedented opportunities for business process innovation.
SAP Cloud Platform for SAP S/4HANA: Accelerate your move to an Intelligent En... (SAP Technology)
Keep SAP S/4HANA clean to deliver a stable digital core and accelerate innovations. Leverage SAP Cloud Platform for SAP S/4HANA integration with SAP and non-SAP applications, side-by-side extensions, and innovations so that you can accelerate your move to SAP S/4HANA, deliver a stable S/4HANA system, and offer an agile development.
Innovate collaborative applications with SAP Jam Collaboration & SAP Cloud Pl... (SAP Technology)
Learn how SAP Jam Collaboration together with SAP Cloud Platform helps you to transform your existing applications by enabling collaboration inside applications and allows you to innovate next-generation collaborative applications. SAP Jam Collaboration also simplifies application development by providing a single platform to brainstorm, collaborate, share documents and track progress and streamline application roll-out and support by providing a place to share training material and self-service support forums. With SAP Jam Collaboration, you can also build a modern intranet with collaboration capabilities which is easy to update and accessible from mobile and outside corporate network. You can simplify IT landscape by consolidating software solutions such as document storage systems, blogging platform, wikis, collaboration platform and real-time messaging tools using SAP Jam Collaboration.
The document discusses how consumer expectations are changing and companies must adapt their business models to engage with each consumer individually through personalized and authentic relationships. It identifies collecting more consumer and market data from various sources, analyzing big data in real-time, and using technologies like IoT to enrich consumer experiences as important technology priorities for consumer companies. SAP Leonardo and its cloud and IoT capabilities can help companies with digital transformation, gain new consumer insights, and accelerate their business in order to engage digital consumers.
The IoT Imperative for Discrete Manufacturers - Automotive, Aerospace & Defen... (SAP Technology)
The document discusses how digital transformation through the Internet of Things (IoT) is disrupting discrete manufacturing industries. It recommends that manufacturers innovate by focusing on creating business value with IoT, investing in key digital scenarios, and integrating new technologies. IoT capabilities around connectivity, data management, analytics and cloud technology can help manufacturers improve quality, strategically manage assets, increase efficiency and engage customers.
IoT is Enabling a New Era of Shareholder Value in Energy and Natural Resource... (SAP Technology)
The document discusses how Internet of Things (IoT) technology can help energy and natural resource companies improve operations and drive innovation. Specifically, it explains that IoT allows these companies to do more with the sensor data generated by field assets by integrating it with cloud, analytics, and machine learning capabilities. This convergence of operations data with real-time business information through enhanced IoT capabilities allows workers to make better decisions. The document promotes SAP Leonardo's IoT capabilities for connecting sensor networks with core applications to optimize asset usage, minimize costs, and improve workforce safety in the energy and natural resources industry.
The IoT Imperative in Government and Healthcare (SAP Technology)
The document discusses how public service organizations can use Internet of Things (IoT) technologies to address challenges around aging populations, urbanization, and changing expectations. It notes that constrained budgets and increasing demands are stressing governments. The document advocates that organizations pursue digital transformation and use IoT to improve service efficiency, enhance smart cities, transform defense/security, and advance healthcare outcomes rather than just service volume. SAP Leonardo IoT capabilities can help organizations leverage big data and analytics to seize opportunities in these areas.
This presentation talks about how SAP S/4HANA can empower finance to strategically guide your business evolution via instant insights, intuitive user experience, and a flexible non-disruptive platform.
Five Reasons To Skip SAP Suite on HANA and Go Directly to SAP S/4HANA (SAP Technology)
Don’t burn extra resources. Read on to discover five reasons to fly direct to SAP S/4HANA instead of adding a layover at SAP Suite on Hana (SoH) on the way.
SAP Helps Reduce Silos Between Business and Spatial Data (SAP Technology)
Discover how spatial solutions from SAP can help your business leverage geographic and spatial data to deliver location intelligence, increase insight, and improve efficiency. Solutions include SAP HANA, SAP BusinessObjects Analytics, SAP Geographical Enablement Framework, SAP GEO.e, Galigeo.
This presentation is a supplement to the "Why SAP HANA?" video on Youtube. Download and follow along to the video. An added bonus to this presentation is an Appendix and Presentation Notes slides.
Video Link: https://www.youtube.com/watch?v=VCEr9Y8ZrVQ
Spotlight on Financial Services with Calypso and SAP ASE (SAP Technology)
Capital markets solutions from Calypso and SAP ASE can help your organization reduce the total number of systems in use, simplify business architecture, streamline processes, and improve efficiency – all while reducing total cost of ownership.
Spark Usage in Enterprise Business Operations (SAP Technology)
At Spark Summit East 2016, SAP’s Ken Tsai highlighted how SAP HANA Vora extends Apache Spark to provide OLAP modeling capabilities and real-time query federation to enterprise data. You will learn real-world use cases where instant insight from a combination of enterprise and Hadoop data make an impact on everyday business operations.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Digital Marketing Trends in 2024 | Guide for Staying Ahead (Wask)
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
leewayhertz.com - AI in predictive maintenance: Use cases, technologies, benefits ... (alexjohnson7307)
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Letter and Document Automation for Bonterra Impact Management (fka Social Sol... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Tabular Data Stream: The Binding Between Client and SAP ASE
1. (c) 2015 Independent SAP Technical User Group, Annual Conference, 2015
TDS Protocol Reference
TDS - Tabular Data Stream
Reference and information to assist in diagnosing client/server problems
Author: Paul Vero
March, 2015
2. (c) 2015 Independent SAP Technical User Group, Annual Conference, 2015
Contents
Introduction to TDS
Versions
PDF documents
TDS Documentation
Capture tools
Ribo
GuiProx
Sniffers
Srv.log
Built-in tracing
Tokens
Login
Simple select from execution to response, result set
Formats
Row Data
Parameters
Stored procedure calls with parameters
3. (c) 2015 Independent SAP Technical User Group, Annual Conference, 2015
Contents
Introduction to TDS
Versions
PDF documents
TDS Documentation
Capture tools
Ribo
GuiProx
Sniffers
Srv.log
Tokens
Login
Simple select from execution to response, result set
Formats
Row Data
Parameters
Stored procedure calls with parameters
4. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Introduction to TDS
• You always hear the Engineer or PSE say “Get me a
TDS trace!”
• “What the heck is that?”
• Tabular Data Stream
• It’s a token based, application level protocol used to
send requests and responses between clients and
servers
• Relies on a connection oriented transport service
• TCPIP provides such a transport
5. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Versions
• The TDS Specification has been around a long time
• Started with 4.X (4.2)
• The earliest I’m aware of is TDS 4.0, 1988
• Current TDS version is 5.0
• Released mid to late 90s
• Revisions are labeled as 3.X
• Latest revision is 3.9
• February 2007
6. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Contents
Introduction to TDS
Versions
PDF documents
TDS Documentation
Capture tools
Ribo
GuiProx
Sniffers
Srv.log
Tokens
Login
Message Header
Simple select from execution to response, result set
Formats
Row Data
Parameters
Stored procedure calls with parameters
7. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS documentation
• This is the internal TDS webpage:
• http://www-ocsd/TDS/
• Contains the current version:
• http://www-ocsd/TDS/specification/39/tds39.pdf
• Previous and archived versions
• The specification is in PDF format
• Contains reference for every supported TDS token
8. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS documentation
• This section will provide additional information as it
walks through the TDS specification document,
tds39.pdf
• This is a high level explanation of the sections
• This is useful to understand the concept of a
client/server protocol
• This presentation will not cover every chapter in the
specification
• Guide to get you used to the document
9. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
What uses TDS?
• Any client for ASE uses TDS
• Think of it as the communicating protocol between
server (ASE) and client
• Supported clients:
• Open Client/Server
• ASE Data Access Products:
• ODBC, OLE DB, ADO.NET, jConnect
• Proxy servers and gateways:
• ASE/CIS, DirectConnect, Replication Server, IQ
10. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS Model
SQLConnect();
SQLExecute()
SQLFetch();
ODBC Code
ODBC
API
TDS/Network
Layer
ODBC Driver
TDS/Network Layer
TDS LOGIN record
User name: sa
Password: []
Charset: cp850
11. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Sample TDS Session
• When client is going to connect to
ASE it first requests a transport
connection (TCPIP) to the ASE server.
• Client then sends the login record to
start the dialog.
• ASE responds with an
acknowledgement token
(TDS_LOGINACK) along with a
completion token (TDS_DONE).
• When this occurs, it means the ASE
accepts the dialog with this client
12. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Sample TDS Session
• With the established dialog the client
is now ready to send SQL statements
to the ASE.
• To send a SELECT statement, the client sends it in a TDS_LANGUAGE token.
• ASE executes the query and sends the
following:
• Column description of result set
• Data rows follow this
• TDS_DONE completion packet, with row count
is the last thing sent
13. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Sample TDS Session
Login request
Client Server (ASE)
hostname: asehost
Username: tdsuser
Password: tdspwd
tds version: 5.0
Application: ado.net
Login request
TDS_LOGINACK
TDS_DONE
14. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Sending TDS request
Language Request
Client Server (ASE)
- Execute: Select * from authors
- Process the rows when received
- Stop processing when last row is received
TDS_LANGUAGE
TDS_ROWFMT
TDS_DONE
TDS_ROW, TDS_ROW, …
15. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Protocol Data Units - PDUs
PDU is used to contain the request (client) and response
(server)
The request/response might span several PDUs
Size of the PDU is negotiated between client and server at login time (dialog establishment)
PDU contains header and usually data
16. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
• PDU Header
• The PDU header contains information about the size and contents of the PDU, as well as an indication of whether this is the last PDU in a request/response
• Described in the Message Buffer Header reference page
• TDS is half-duplex
• Client writes complete request
• Then it reads complete response from server
• None are intermixed and multiple requests can’t be outstanding (active results pending)
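The 8-byte PDU header layout can be seen concretely in the Ribo output a few slides later: packet type (1 byte), status (1 byte), length (2 bytes), channel (2 bytes), packet number (1 byte) and window (1 byte). As a minimal sketch (not an official API), assuming that layout and a network-byte-order length, the header can be decoded like this:

import struct

BUFSTAT_EOM = 0x01  # status bit: last PDU of the request/response

def parse_pdu_header(buf):
    # Illustrative decoder for the 8-byte header shown in the Ribo output:
    # type(1), status(1), length(2), channel(2), packet no(1), window(1).
    ptype, status, length, channel, packet_no, window = struct.unpack(">BBHHBB", buf[:8])
    return {
        "type": ptype,            # e.g. 0x02 BUF_LOGIN, 0x04 BUF_RESPONSE
        "status": status,         # e.g. 0x00 BUFSTAT_BEGIN, 0x01 BUFSTAT_EOM
        "length": length,         # total PDU length, header included
        "channel": channel,
        "packet_no": packet_no,
        "window": window,
        "is_last": bool(status & BUFSTAT_EOM),
    }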
17. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
PDU Header
18. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
PDU Header
• Before going further there are some
tools to be discussed in detail later,
used to format PDUs in readable
format
• RAW format displays data similar to that
found in sniffer trace
• Use ASE traceflags on command-line:
• 4001, 4013, 4034, 4035 and 4036
• 4013 prints out the login record contents
• Information goes to stdout so might want to use
tee
• Formatted output with RIBO
• Java based TDS packet gateway tool
20. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Login record from ASE errorlog
This traceflag, 4013 is useful to get the contents of the login
record:
00:00000:00017:2008/04/10 22:27:26.30 server Received LOGINREC
LOGINREC at 0x20AF7D88
host=`PVEROXP' user=`sa' hostproc=`5864'
int2=3 int4=1 char=6 flt=10 date=9
usedb=1 dmpld=1 interface=0 netconn_type=0
appname=`isql' servername=`ribo'
tds_vers=(5.0.0.0) progname=`CT-Library' prog_vers=(15.0.0.0)
noshort=0 flt4=13 date4=17
language=`' setlang=0
SECURITY: hier=0 e2e option: 0x00 db bulk reserved: 0x00
HA: ssn option: 0x08 ssn handle:(0x00, 0x00, 0x00, 0x00, 0x00, 0x00)
UNUSED: slunused:(0x00)
role=0
charset=`iso_1' setcharset=0 packetsize=`512'
00:00000:00017:2008/04/10 22:27:26.30 server login: sa PVEROXP, spid: 17, kpid:
1245203 (0x130013)
21. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS output from RIBO
Capture Record Header
Source [4]: REQUEST (0x00000001)
Length [4]: 615
PDU Header
TDS Packet Type [1]: BUF_LOGIN (0x02)
Status [1]: BUFSTAT_BEGIN (0x00)
Length [2]: 512
Channel [2]: 0
Packet No. [1]: 0
Window [1]: 0
PDU Header
TDS Packet Type [1]: BUF_LOGIN (0x02)
Status [1]: BUFSTAT_EOM (0x01)
Length [2]: 103
Channel [2]: 0
Packet No. [1]: 0
Window [1]: 0
22. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
PDU Data
• PDUs usually include some data
• Control PDUs don’t contain data
• Only header
• Request and Response PDUs contain TDS tokens
• Tokens describe the request/response
• Data is exchanged so the client and server can
complete the request/response cycle
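Because the protocol is half-duplex, a client can assemble a complete server response by reading PDUs and appending their data until it sees a PDU whose status has BUFSTAT_EOM set. A rough sketch of that loop follows; recv_exact is a hypothetical helper, and the same header-layout assumptions as the earlier snippet apply:

import struct

BUFSTAT_EOM = 0x01

def recv_exact(sock, n):
    # Hypothetical helper: read exactly n bytes from a connected socket.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        data += chunk
    return data

def read_response(sock):
    # Collect the token stream of one complete response (half-duplex model).
    payload = b""
    while True:
        header = recv_exact(sock, 8)
        _, status, length, _, _, _ = struct.unpack(">BBHHBB", header)
        payload += recv_exact(sock, length - 8)  # length includes the 8-byte header
        if status & BUFSTAT_EOM:                 # last PDU of this response
            return payload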
23. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Client PDUs - Requests
Dialog establishment information
Language command
Cursor command
Database Remote Procedure Call
Attentions
Dynamic SQL command
Message command
24. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Client PDU: Dialog Establishment
• Create a transport connection
• This is a low level network function
• TCPIP and socket establishment
• Send Login record
• Credentials and properties
• Send Capability data stream
• What can I (client) do, what can’t I do
• What can and can’t you (the Server) do
• Authentication handshaking
• Password encryption
• Read login acknowledgement
25. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Client PDU: Language Commands
• Send a SQL Command to the ASE
• TDS_LANGUAGE token is used to send the command
• Command may span multiple PDUs
• Limited by length field in the TDS_LANGUAGE token
26. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Client PDU: Cursor Commands
• There are two ways to send cursor commands
• Language commands
• TDS_LANGUAGE
• Send to ASE using Cursor T-SQL commands
• Other servers must parse the language and be able to implement the cursor semantics
• Cursor TDS tokens
• ANSI specified cursor operations
• Native token support
• Efficient by eliminating parsing sequence
• TDS_CUR* tokens
27. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Client PDU: Remote Procedure Calls (RPC)
• Client sends TDS_DBRPC token
• Binary stream containing:
• Rpc NAME
• Options
• Parameters
• No intermixing with SQL commands or other RPC
commands
• Uses additional tokens to define the parameters and to
send parameter values
28. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Client PDU: Attention
The client sends an attention signal to the server to
cancel its current request
Client sends attention to server
Client continues to read
Client discards the data read after sending attention
Server acknowledges the attention and now
everyone is happy
29. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Client PDU: Dynamic SQL
The client sends TDS_DYNAMIC or TDS_DYNAMIC2 data
stream to execute dynamic SQL on the server.
Stream indicates various functionality:
Prepare
Execute
Deallocate
Arguments included?
SQL Statement and identification
30. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Server PDU: Response
Dialog Establishment acknowledgement
Login request is received
Row results
Return status from procedures
Return parameters
Response completion
Error information
Attention acknowledgement
Cursor status
Message responses
31. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
On to the gory details
• The TDS specification documentation contains details
on various topics
• Read these when you’re ready to learn the topic, like
cursors, security, etc.
• Contains descriptions and diagrams of the data flow
32. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Contents
Introduction to TDS
Versions
PDF documents
TDS Documentation
Capture tools
Ribo
GuiProx
Sniffers
Srv.log
Built-in tracing
Tokens
Login
Message Header
Simple select from execution to response, result set
Formats
Row Data
Parameters
Stored procedure calls with parameters
33. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Capture Tools
• Capture tools gather the TDS data stream into a file
• Useful for troubleshooting purposes
• Everyone asks for TDS trace
• Verify activity
• Determine problems
• Isolate whether client or server side
34. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Capture Tools
Ribo
Most common
Formatted
Included in SDK and ASE installs
GuiProx
Useful when needing timestamps
Raw output
Sniffers
Used when unable to get others to work
Necessary if issues with TCP/IP protocol
ASE TDS
Trace options capture raw TDS between server and client
srv.log
Open Server log
Raw TDS
DirectConnect, DCO
35. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Ribo
• Ribo is most common tool
• Java based and requires these env variables:
• set RIBO_HOME=%SYBASE%\jutils-2_0\ribo
• set JAVA_HOME=C:\Java\jdk\jdk1.5.0_09
• Just make sure this points to a jvm
• Start up with arguments
• ribo -l 9999 -s %1 -p %2 -gui -c cat -t -f devfilter.x
36. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Ribo Gui
Start up Ribo and this is what you get
37. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Ribo Gui
• Preferences provide options
• Capture file prefix
• Translation filter
• Option to translate
• Option to display in a window
• Careful as this is resource intensive
38. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Ribo Gui: Display in Window
39. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Ribo is a TDS gateway
Client Workstation
Host: Rain
Connect to Ribo Host
ASE Server
Host: Snow
Port: 1502
Ribo Host: Rain
Port: 9999
Connects to
Snow,1502
Ribo -l 9999 -s Snow -p 1502 -gui …
40. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
RIBO environment
Windows example:
set RIBO_HOME=c:\sap\syb157\jutils-3_0\ribo
set JAVA_HOME=C:\sap\syb157\jre64
set PATH=%JAVA_HOME%\bin;%PATH%
c:
cd %RIBO_HOME%
41. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Sample RIBO Output
Connect to ASE 15.0.2
Issue select @@version
Issue select * from tds_table
Table has int, varchar, decimal and datetime
Call stored procedure
sp_test @a, @b
@b is output parameter
42. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Select @@version
43. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Ribo trace walkthrough: Login Record
Login Record; fixed length.
Host Name [30]: "PVEROXP"
Host Name Length [1]: 7
User Name [30]: "sa"
User Name Length [1]: 2
Password [30]: ""
Password Length [1]: 0
Host Process [30]: "3124"
Host Process Length [1]: 4
Byte Ordering - int2 [1]: 3
Byte Ordering - int4 [1]: 1
Character Encoding [1]: 6
Float Format [1]: 10
Date Format [1]: 9
lusedb [1]: 0x01
ldmpld [1]: 1
linterfacespare [1]: 0x00
44. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Login Record
Dialog Type [1]: 0
lbufsize [1]: 0
spare [3]: 0x000000
Application Name [30]: "isql"
Application Name Length [1]: 4
Service Name [30]: "ribo"
Service Name Length [1]: 4
Remote Passwords [255]:
<universal>/<null>
Remote Passwords Length [1]: 2
TDS Version [4]: 5.0.0.0
Prog Name [30]: "CT-Library"
Prog Name Length [1]: 10
Prog Version [4]: 15.0.0.0
Convert Shorts [1]: 0
4-byte Float Format [1]: 13
4-byte Date Format [1]: 17
45. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Login Record
Language [30]: ""
Language Length [1]: 0
Notify when Changed [1]: 0
Old Secure Info [2]: 0x0000
Secure Login Flags [1]: UNUSED
(0x00)
Bulk Copy [1]: 0
HA Login Flags [1]: 0x08
HA Session ID [6]:
0x000000000000
Spare [2]: 0x0000
Character Set [30]: "iso_1"
Character Set Length [1]: 5
Notify when Changed [1]: 0
Packet Size [6]: 512
Packet Size Length [1]: 3
47. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
GuiProx
• Java gateway similar to Ribo in ways
• Internal tool and provided to customers under specific
circumstances
• Raw TDS output
• Contains response and “think” times
• Timestamps
• Adjustable log size
48. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
GuiProx
[2014-03-10 12:48:52.124] Listening on port : 6001
[2014-03-10 12:48:52.124] Server host is : 10.168.131.204
[2014-03-10 12:48:52.124] Server port is : 1570
[2014-03-10 12:48:52.124] Hex dump filter : true
[2014-03-10 12:48:52.124] Sql filter : true
[2014-03-10 12:48:52.124] TimeStamp entries : true
[2014-03-10 12:48:52.134] Lilo dump filter : true
[2014-03-10 12:48:52.134] Gui display window : false
[2014-03-10 12:48:52.134] SSL Debugging : false
[2014-03-10 12:48:52.134] Capture Tds : true
[2014-03-10 12:48:52.134] Log Sql to file : false
[2014-03-10 12:48:52.134] **** _sendDelay set to 0 millisecs
[2014-03-10 12:48:52.134] **** _recvDelay set to 0 millisecs
[2014-03-10 12:48:52.134] **** _cancelDelay set to false
[2014-03-10 12:48:54.764] Thread-4: Opening socket to Server
[2014-03-10 12:48:54.764] Thread-4: Tds Logging to file : cap0.tds
50. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Network Sniffers
• Most operating systems contain built-in sniffers
• tcpdump (Linux)
• snoop (Solaris)
• Wireshark very popular and used to view captures
from other sniffer apps
• Learn TCP/IP to understand the packets
52. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Open Client debug variable
C:...AtlantaTDS>set SYBOCS_DEBUG_FLAGS=CS_DBG_PROTOCOL
C:...AtlantaTDS>isql -Usa -Psecret -Spvw71572 -Dodbc
1> select * from tds_table
2> go
c1 c2 c3 c4
----------- -------------------- ------------- -------------------------------
1 TDS_LANGUAGE 2.1000 Mar 8 2015 9:56PM
2 TDS_DBRPC 14.6000 Mar 8 2015 9:56PM
3 TDS_CURDECLARE 8.6100 Mar 8 2015 9:56PM
4 TDS_DYNAMIC 14.7000 Mar 8 2015 9:56PM
5 TDS_ROW 13.1000 Mar 8 2015 9:56PM
(5 rows affected)
1> quit
Directory of C:...AtlantaTDS
03/10/2015 10:44 PM 2,104 capF6BF.tmp
53. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Open Client debug variable
Run it through RIBO:
You can run the Ribo utility on the .tds file to format it as readable text.
You can use a script:
ribo %1%2.tds %1%2.txt
mv or rename the *.tmp file to *.tds
In this case
ribo C:...AtlantaTDS capF6BF.tds
Creates capF6BF.txt file
You can view file in editor to check the TDS
54. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Protocol Capture
• Drivers now have a built-in TDS trace generator
• Create in same TDS format as RIBO
• Use RIBO to format into Text
• jConnect: PROTOCOL_CAPTURE
• One thread/one connection
• Use RIBO for multi-thread/connections
• ASE ADO.NET and ODBC
• protocolcapture=filename.tds
• Appends pid and thread/connection number
• Starts at 0
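As a rough illustration of the ODBC side, the protocolcapture property can simply be added to the connection string. Everything in the snippet below other than protocolcapture (pyodbc itself, the driver name, and the server/port/uid/pwd keywords) is an assumption about a typical setup and may differ in your installation:

import pyodbc  # assumes the SAP ASE ODBC driver is installed and registered

# Illustrative only; keyword names other than "protocolcapture" are assumptions.
conn = pyodbc.connect(
    "Driver={Adaptive Server Enterprise};"
    "server=asehost;port=5000;"
    "uid=sa;pwd=secret;"
    "protocolcapture=odbc_trace.tds"  # driver appends pid and connection number
)

The resulting .tds file can then be run through Ribo to format it as text, exactly as described for the Open Client capture files.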
55. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Contents
Introduction to TDS
Versions
PDF documents
TDS Documentation
Capture tools
Ribo
GuiProx
Sniffers
Srv.log
Tokens and Flow
TDS_ENVCHANGE, EED, LANGUAGE, etc
Simple select from execution to response, result set
Formats
Row Data
Parameters
Stored procedure calls with parameters
56. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS Tokens
• TDS Tokens are the containers of the activity and
information
• Token expresses what it does
• Language, rpc, cursor, dynamic
• Is it data, message, or something else?
• Data portion contains the information
• Data from server response
• Format information for parameters and fields
• Messages, warnings and errors
• Other forms of status
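For reference, the token IDs that appear in the Ribo output throughout this deck can be collected into a small lookup table, which is handy when scanning a raw hex dump:

# Token IDs as they appear in the Ribo output in this presentation.
TDS_TOKENS = {
    0x20: "PARAMFMT2",
    0x21: "LANGUAGE",
    0x61: "ROWFMT2",
    0x79: "RETURNSTATUS",
    0xD1: "ROW",
    0xD7: "PARAMS",
    0xE3: "ENVCHANGE",
    0xE5: "EED",
    0xEC: "PARAMFMT",
    0xEE: "ROWFMT",
    0xFD: "DONE",
    0xFF: "DONEINPROC",
}

def token_name(token_id):
    return TDS_TOKENS.get(token_id, "UNKNOWN (0x%02X)" % token_id)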
57. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Environment Change
• TDS_ENVCHANGE (0xE3)
• The token contains environment change information
• Items that can change, based on ASE defaults and client
specified
• Current database
• Charset
• Language
• Packet size
58. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Environment Change - TDS_ENVCHANGE (0xE3)
ENVCHANGE Token (0xE3); variable length.
Length [2]: 13
Type [1]: ENV_DB (0x01)
Length of New Value [1]: 4
New Value [4]: "odbc"
Length of Old Value [1]: 6
Old Value [6]: "master"
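A minimal sketch of decoding that token body, assuming the field order shown above (2-byte overall length, then type, new-value length, new value, old-value length, old value) and the little-endian integer layout a typical Intel client negotiates at login:

import struct

ENV_DB = 0x01  # type code for a database change, as shown above

def parse_envchange(data, offset):
    # offset points at the byte after the 0xE3 token id; illustrative only.
    (length,) = struct.unpack_from("<H", data, offset)
    pos = offset + 2
    env_type = data[pos]; pos += 1
    new_len = data[pos]; pos += 1
    new_value = data[pos:pos + new_len].decode("ascii"); pos += new_len
    old_len = data[pos]; pos += 1
    old_value = data[pos:pos + old_len].decode("ascii")
    return env_type, new_value, old_value, offset + 2 + length

With the values above (type 1, "odbc", "master") the token body is 13 bytes, matching the Length field in the trace.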
59. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Environment Change – use database exchange
LANGUAGE Token (0x21); variable length.
Length [4]: 10
Status [1]: UNUSED (0x00)
Text Length [0]: [9]
Text [9]: "use odbc
"
Capture Record Header
Source [4]: RESPONSE (0x00000002)
Length [4]: 101
PDU Header
TDS Packet Type [1]: BUF_RESPONSE (0x04)
Status [1]: BUFSTAT_EOM (0x01)
Length [2]: 101
Channel [2]: 0
Packet No. [1]: 0
Window [1]: 0
ENVCHANGE Token (0xE3); variable length.
Length [2]: 13
Type [1]: ENV_DB (0x01)
Length of New Value [1]: 4
New Value [4]: "odbc"
Length of Old Value [1]: 6
Old Value [6]: "master"
60. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Environment Change – use database exchange
EED Token (0xE5); variable length.
Length [2]: 65
Message Number [4]: 5701
Message State [1]: 1
Message Class [1]: 10
SQL State Length [1]: 5
SQL State [5]: "ZZZZZ"
Status [1]: NO_EED (0x00)
Transaction State [2]: TDS_TRAN_SUCCEED (0x0001)
Message Length [2]: 36
Message Text [36]: "Changed database context to 'odbc'.
"
Server Name Length [1]: 8
Server Name [8]: "pvxp1253"
Stored Proc. Name Length [1]: 0
Line Number [2]: 1
DONE Token (0xFD); fixed length.
Length [0]: [8]
Status [2]: DONE_FINAL (0x0000)
TranState [2]: TDS_TRAN_PROGRESS (0x0002)
Count (unused) [4]: 0
61. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS_LANGUAGE
• Sends SQL statement to the ASE as a
language command
• Think of the command as what you’d send
via ISQL
• SELECT * FROM authors
• INSERT INTO table1 values (1, ‘a’, 12.3)
• Simple token
• Length [4 byte unsigned] – length limited to
0xFFFF
• 65535 bytes
• Status
• Language Text
62. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS_LANGUAGE
Here’s the sample “select @@version”
LANGUAGE Token (0x21); variable length.
Length [4]: 18
Status [1]: UNUSED (0x00)
Text Length [0]: [17]
Text [17]: "select
@@version
"
63. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Response to TDS_LANGUAGE
Remember the PDU header, and this is a response:
Capture Record Header
Source [4]: RESPONSE (0x00000002)
Length [4]: 163
PDU Header
TDS Packet Type [1]: BUF_RESPONSE (0x04)
Status [1]: BUFSTAT_EOM (0x01)
Length [2]: 163
Channel [2]: 0
Packet No. [1]: 0
Window [1]: 0
64. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Response to TDS_LANGUAGE: Row format
This is the TDS_ROWFMT2 token
Describes data type, length, and status of row
data
ROWFMT2 Token (0x61);
Length [4]: 18
Number of Columns [2]: 1
Column 1
Column Label Length [1]: 0
Catalog Name Length [1]: 0
Schema Length [1]: 0
Table Name Length [1]: 0
Column Name Length [1]: 0
Status [4]: ROW_UPDATABLE + ROW_NULLALLOWED (0x00000030)
User Type [4]: 0x00000002
Data Type [1]: VARCHAR
Length [1]: 255
Locale Length [1]: 0
68. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS_ROWFMT2 Data Type
The TDS Datatype section describes the fields for the TDS
datatype in the format tokens:
69. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS_ROW: The data
The TDS_ROW, 0xD1, token represents a row of data
ROW Token (0xD1); variable length.
Column 1
Length [1]: 117
Row data [117]: "Adaptive Server
Enterprise/12.5.3/EBF 13331 ESD#7/P/NT (IX86)/OS
4.0/ase1253/1951/32-bit/OPT/Fri Mar 24 02:17:56 2006"
(0x41646170746976652053657276657220456E74657270726973652F3
1322E352E332F4542462031333333312045534423372F502F4E5420284
9583836292F4F5320342E302F617365313235332F313935312F33322D6
269742F4F50542F467269204D61722032342030323A31373A353620323
03036)
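Reading the row back is driven by the preceding ROWFMT/ROWFMT2 token: a VARCHAR column carries a 1-byte length prefix followed by that many bytes of data. A minimal sketch for the single-column @@version result above (illustrative helper only):

def read_varchar_column(data, offset):
    # Variable-length column: 1-byte length prefix, then the data bytes.
    length = data[offset]
    value = data[offset + 1:offset + 1 + length].decode("ascii")
    return value, offset + 1 + length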
70. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS_DONE – When things are done
• The TDS_DONE token indicates completion status of the
response
• One for each statement
• On selects it returns rowcount
DONE Token (0xFD); fixed length.
Length [0]: [8]
Status [2]: DONE_COUNT (0x0010)
TranState [2]: TDS_TRAN_PROGRESS (0x0002)
Count [4]: 1
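Decoding the fixed-length DONE body is just unpacking the 2-byte status, 2-byte transaction state and 4-byte count shown above; this sketch assumes the little-endian layout negotiated by the client:

import struct

DONE_COUNT = 0x0010  # status bit: the count field is valid (per the slide above)

def parse_done(data, offset):
    # offset points at the byte after the 0xFD token id; fixed 8-byte body.
    status, tran_state, count = struct.unpack_from("<HHI", data, offset)
    row_count = count if status & DONE_COUNT else None
    return status, tran_state, row_count, offset + 8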
71. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Another select example
This example selects four columns:
create table tds_table
(
c1 int not null,
c2 varchar(20),
c3 numeric(10,4),
c4 datetime,
constraint tds_pk primary key (c1)
)
72. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
ODBC Connection to select
You can use the login packet to identify the application and
the client
Login Record; fixed length.
Host Name [30]: "PVEROXP"
Host Name Length [1]: 7
User Name [30]: "sa"
User Name Length [1]: 2
Password [30]: "sybase"
Password Length [1]: 6
Host Process [30]: "5580"
Host Process Length [1]: 4
Application Name [30]: "ODBCTEST"
Application Name Length [1]: 8
Service Name [30]: ""
Service Name Length [1]: 0
Remote Passwords [255]: <universal>/"sybase"
Remote Passwords Length [1]: 8
TDS Version [4]: 5.0.0.0
Prog Name [30]: "tdsconn"
Prog Name Length [1]: 7
Prog Version [4]: 12.0.5.0
Convert Shorts [1]: 0
4-byte Float Format [1]: 13
4-byte Date Format [1]: 17
Language [30]: "us_english"
Language Length [1]: 10
73. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Getting spid from TDS_DONE response to TDS_LOGIN
• On newer ASE servers [verify the earliest] the
TDS_DONE response to Login request contains the
spid of the session
• Excellent way to match the TDS trace with the ASE
session for trouble-shooting purposes
DONE Token (0xFD); fixed length.
Length [0]: [8]
Status [2]: DONE_FINAL (0x0000)
TranState [2]: TDS_TRAN_PROGRESS (0x0002)
Count (unused) [4]: 18
74. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
The TDS_LANGUAGE
The select is executed via the TDS_LANGUAGE event
When ODBCTEST calls SQLExecDirect(“SELECT * FROM
tds_table”)
LANGUAGE Token (0x21); variable length.
Length [4]: 24
Status [1]: UNUSED (0x00)
Text Length [0]: [23]
Text [23]: "select * from tds_table"
75. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Contents
Introduction to TDS
Versions
PDF documents
TDS Documentation
Capture tools
Ribo
GuiProx
Sniffers
Srv.log
Tokens and Flow
TDS_ENVCHANGE, EED, LANGUAGE, etc
Simple select from execution to response, result set
Formats
Row Data
Parameters
Stored procedure calls with parameters
76. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Formats
• Row formats are provided for row results and contain description information for the specified column
• TDS tokens TDS_ROWFMT and TDS_ROWFMT2 provide this information.
• Used by applications when binding the fields to
program variables
• The same concept occurs with parameters
• TDS_PARAMFMT/PARAMFMT2 provide parameter
information
77. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS_ROWFMT2 exposed
ROWFMT2 Token (0x61);
Length [4]: 139
Number of Columns [2]: 4
Column 1
Column Label Length [1]: 2
Column Label [2]: "c1"
Catalog Name Length [1]: 4
Catalog [4]: "odbc"
Schema Length [1]: 3
Schema [3]: "dbo"
Table Name Length [1]: 9
Table Name [9]: "tds_table"
Column Name Length [1]: 2
Column Name [2]: "c1"
Status [4]: ROW_UPDATABLE (0x00000010)
User Type [4]: 0x00000007
Data Type [1]: INT4
Length [0]: [4]
Locale Length [1]: 0
78. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS_PARAMFMT exposed
LANGUAGE Token (0x21); variable length.
Length [4]: 56
Status [1]: PARAMETERIZED
(0x01)
Text Length [0]: [55]
Text [55]: "select * from
tds_table where c1=@dr_ta0 and c2=@dr_ta1"
79. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
TDS_PARAMFMT exposed
PARAMFMT Token (0xEC); variable length.
Length [2]: 34
Number of Params [2]: 2
Param 1
Name Length [1]: 7
Name [7]: "@dr_ta0"
Status [1]: PARAM_NULLALLOWED (0x20)
User Type [4]: 0
Data Type [1]: INTN
Length [1]: 4
Locale Length [1]: 0
Param 2
Name Length [1]: 7
Name [7]: "@dr_ta1"
Status [1]: PARAM_NULLALLOWED (0x20)
User Type [4]: 0
Data Type [1]: CHAR
Length [1]: 255
Locale Length [1]: 0
80. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
The tds_table row formats
1> select * from tds_table
2> go
c1 c2 c3 c4
----------- -------------------- ------------- -------------------------------
1 TDS_LANGUAGE 2.1000 Mar 8 2015 9:56PM
2 TDS_DBRPC 14.6000 Mar 8 2015 9:56PM
3 TDS_CURDECLARE 8.6100 Mar 8 2015 9:56PM
4 TDS_DYNAMIC 14.7000 Mar 8 2015 9:56PM
5 TDS_ROW 13.1000 Mar 8 2015 9:56PM
81. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
The tds_table row formats
ROWFMT Token (0xEE);
Length [2]: 46
Number of Columns [2]: 4
Column 1
Name Length [1]: 2
Name [2]: "c1"
Status [1]: ROW_UPDATABLE (0x10)
User Type [4]: 7
Data Type [1]: INT4
Length [0]: [4]
Locale Length [1]: 0
82. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
The tds_table row formats
Column 2
Name Length [1]: 2
Name [2]: "c2"
Status [1]: ROW_UPDATABLE (0x10)
User Type [4]: 2
Data Type [1]: VARCHAR
Length [1]: 20
Locale Length [1]: 00
83. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
The tds_table row formats
Column 3
Name Length [1]: 2
Name [2]: "c3"
Status [1]: ROW_UPDATABLE (0x10)
User Type [4]: 10
Data Type [1]: NUMN
Length [1]: 6
Precision [1]: 10
Scale [1]: 4
Locale Length [1]: 0
84. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
The tds_table row formats
Column 4
Name Length [1]: 2
Name [2]: "c4"
Status [1]: ROW_UPDATABLE (0x10)
User Type [4]: 12
Data Type [1]: DATETIM
Length [0]: [8]
Locale Length [1]: 0
85. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
The tds_table row formats: Results
ROW Token (0xD1); variable length.
Column 1
Length [0]: [4]
Row data [4]: 1 (0x00000001)
Column 2
Length [1]: 12
Row data [12]: "TDS_LANGUAGE" (0x5444535F4C414E4755414745)
Column 3
Length [1]: 6
Row data [6]: 2.1000
Column 4
Length [0]: [8]
Row data [8]: 2015-03-08 21:56:51.533
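The four-column row above shows why the client needs the format token first: INT4 and DATETIM columns are fixed length with no length prefix, while VARCHAR and NUMN columns carry a 1-byte length. A rough sketch of reading one such row, given the formats above (illustrative only; little-endian assumed, and the numeric and datetime bytes are left undecoded):

import struct

def read_tds_table_row(data, offset):
    # c1: INT4, fixed 4 bytes, no length prefix
    (c1,) = struct.unpack_from("<i", data, offset)
    offset += 4
    # c2: VARCHAR, 1-byte length prefix
    n = data[offset]; offset += 1
    c2 = data[offset:offset + n].decode("ascii"); offset += n
    # c3: NUMN (numeric), 1-byte length prefix; raw bytes kept undecoded here
    n = data[offset]; offset += 1
    c3_raw = data[offset:offset + n]; offset += n
    # c4: DATETIM, fixed 8 bytes, no length prefix; raw bytes kept undecoded
    c4_raw = data[offset:offset + 8]; offset += 8
    return (c1, c2, c3_raw, c4_raw), offset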
86. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Contents
Introduction to TDS
Versions
PDF documents
TDS Documentation
Capture tools
Ribo
GuiProx
Sniffers
Srv.log
Tokens and Flow
TDS_ENVCHANGE, EED, LANGUAGE, etc
Simple select from execution to response, result set
Formats
Row Data
Parameters
Stored procedure calls with parameters
87. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
The sp_tds_proc procedure
create procedure sp_tds_proc (
@p1 int,
@p2 varchar(50),
@p3 varchar(50) output
)
as
select c3, c4 from tds_table where c1= @p1 and c2 = @p2
select @p3 = 'Sent from sp_tds_proc'
return 0
go
88. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
The sp_tds_proc procedure
- Cover four concepts of stored procedures
- Input/Output parameters
- Return status
- Formats (similar to Row Results)
- Result set processing
89. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
The sp_tds_proc procedure
Run it through sqltest
Enter the values:
1> sp_tds_proc 1,'TDS_LANGUAGE',null
Parameter: 1 Value: 1
Parameter buffer size: 1 chars
Enter data:int
Enter parameter name with '@':@p1
input or output parameter?:input
target_format.status : 256
current_token is 1
current_token length is 1
Parameter: 2 Value: 'TDS_LANGUAGE'
Parameter buffer size: 14 chars
Enter data type :char
Enter parameter name with '@':@p2
input or output parameter?:input
target_format.status : 256
Check for null
Parameter: 3 Value: null
Parameter buffer size: 4 chars
Enter data type :char
Enter parameter name with '@':@p3
input or output parameter?:output
target_format.status : 1024
Check for null
ct_results : retcode : SUCCEED
TYPE: ROW_RESULT
----------Row 1----------
c3 : 2.1000
c4 : Mar 8 2015 9:56PM
ct_results : retcode : SUCCEED
TYPE: CMD_DONE
(1 rows affected)
ct_results : retcode : SUCCEED
TYPE: STATUS_RESULT
----------Row 1----------
Return Status : 0
ct_results : retcode : SUCCEED
TYPE: PARAM_RESULT
----------Row 1----------
@p3 : Sent from sp_tds_proc
ct_results : retcode : SUCCEED
TYPE: CMD_DONE
(1 rows affected)
ct_results : retcode : END_RESULTS
90. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Parameters
- Procedure takes input parameters to pass values to ASE
- Output parameters return information from ASE
PARAMFMT2 Token (0x20); variable length.
Length [4]: 53
Number of Params [2]: 3
Param 1
Name Length [1]: 3
Name [3]: "@p1"
Status [1]: <unrecognized> (0x00)
User Type [4]: 0
Data Type [1]: INTN
Length [1]: 4
Locale Length [1]: 0
Param 2
Name Length [1]: 3
Name [3]: "@p2"
Status [1]: <unrecognized> (0x00)
User Type [4]: 0
Data Type [1]: LONGCHAR
Length [4]: 32676
Locale Length [1]: 0
Param 3
Name Length [1]: 3
Name [3]: "@p3"
Status [1]: PARAM_RETURN (0x01)
User Type [4]: 0
Data Type [1]: LONGCHAR
Length [4]: 32676
Locale Length [1]: 0
PARAMS Token (0xD7); variable length.
Param 1
Length [1]: 4
Param data [4]: 1 (0x00000001)
Param 2
Length [4]: 12
Param data [12]: "TDS_LANGUAGE"
Param 3
Length [4]: 0
Param data [0]: [null]
91. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Response
ROWFMT Token (0xEE);
Length [2]: 25
Number of Columns [2]: 2
Column 1
Name Length [1]: 2
Name [2]: "c3"
Status [1]: ROW_UPDATABLE (0x10)
User Type [4]: 10
Data Type [1]: NUMN
Length [1]: 6
Precision [1]: 10
Scale [1]: 4
Locale Length [1]: 0
Column 2
Name Length [1]: 2
Name [2]: "c4"
Status [1]: ROW_UPDATABLE (0x10)
User Type [4]: 12
Data Type [1]: DATETIM
Length [0]: [8]
Locale Length [1]: 0
ROW Token (0xD1); variable length.
Column 1
Length [1]: 6
Row data [6]: 2.1000
Column 2
Length [0]: [8]
Row data [8]: 2015-03-08 21:56:51.533
- Row Results
  - Format
  - Data
- DONEINPROC tokens
  - One each for Row results, 1 row affected
  - Return Status counts as another row affected
  - Output parameter is handled via the row result mechanism and counts as another row affected
- RETURN Status
  - TDS_RETURNSTATUS
  - Returns status information to client
  - Can be used to flag various types of errors
- Output parameter
  - PARAMFMT
  - PARAMS (Data returned, treated like row result)
92. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Response
RETURNSTATUS Token (0x79); fixed length.
Length [0]: [4]
Status [4]: 0x00000000
PARAMFMT Token (0xEC); variable length.
Length [2]: 17
Number of Params [2]: 1
Param 1
Name Length [1]: 3
Name [3]: "@p3"
Status [1]: PARAM_RETURN
(0x01)
User Type [4]: 2
Data Type [1]: LONGCHAR
Length [4]: 16384
Locale Length [1]: 0
PARAMS Token (0xD7); variable length.
Param 1
Length [4]: 21
Param data [21]: "Sent from
sp_tds_proc"
DONE Token (0xFD); fixed length.
Length [0]: [8]
Status [2]: DONE_FINAL
(0x0000)
TranState [2]: TDS_TRAN_PROGRESS
(0x0002)
Count (unused) [4]: 1
DONEINPROC Token (0xFF); fixed length.
Length [0]: [8]
Status [2]: DONE_MORE + DONE_COUNT
+ DONE_EVENT (0x0051)
TranState [2]: TDS_TRAN_PROGRESS
(0x0002)
Count [4]: 1
DONEINPROC Token (0xFF); fixed length.
Length [0]: [8]
Status [2]: DONE_MORE + DONE_COUNT
+ DONE_EVENT (0x0051)
TranState [2]: TDS_TRAN_PROGRESS
(0x0002)
Count [4]: 1
DONEINPROC Token (0xFF); fixed length.
Length [0]: [8]
Status [2]: DONE_MORE + DONE_COUNT
+ DONE_EVENT (0x0051)
TranState [2]: TDS_TRAN_PROGRESS
(0x0002)
Count [4]: 1
93. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
RETURNSTATUS
Used to provide status from the procedure.
Application code can test this value to determine various events
ASE provides some negative values for various errors
Coders can test the values; for example, "8" might mean we just called 8 triggers to update various tables.
94. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Summary
TDS used to troubleshoot errors
On client or ASE?
Reverse Engineer
Can use the output to write test code that reproduces the problem
Understand communication between Server and Client
95. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
Question and Answer
Questions
96. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
ISUG articles
• TDS Revealed: A Primer to Understanding the
Communication Language of ASE
• Issue 2, 2009
• Chomping on the Network with Wireshark
• November 2010 and January 2011
• TDS Case Studies: Root Cause Analysis Techniques
for ASE Drivers
• September 2010
97. (c) 2015 Independent SAP Technical User
Group
Annual Conference, 2015
More Resources
• Product Documentation
• http://help.sap.com/adaptive-server-enterprise?current=database&show_children=false
• How to get best results from an SAP Search
• https://service.sap.com/sap/support/notes/2081285
• Service Market Place/Support
• https://service.sap.com
• https://support.sap.com/home.html
• SAP Communities
• http://scn.sap.com/community/sybase-adaptive-server-enterprise
• http://scn.sap.com/community/developer-center/oltp-db
• Social Media Product Support Channels
• https://www.facebook.com/SapProductSupport
• https://twitter.com/SAPSupporthelp