This document discusses using libcurl's share API to share data like cookies and DNS caches between multiple easy handles. It explains that some curl state is kept in the easy handle, so transfers using different handles may not be fully independent. The share API allows creating share objects that specify what data to share, such as cookies and DNS caches. Easy handles can then specify which share objects to use to share data between transfers and achieve better performance than using separate handles independently.
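As a rough sketch of the idea, here is how the share API can be exercised from Python through the pycurl binding (assuming pycurl is installed; the corresponding C calls are curl_share_init and curl_share_setopt with CURLSHOPT_SHARE):

```python
import pycurl

# Create a share object and declare what it should share.
share = pycurl.CurlShare()
share.setopt(pycurl.SH_SHARE, pycurl.LOCK_DATA_COOKIE)  # share cookies
share.setopt(pycurl.SH_SHARE, pycurl.LOCK_DATA_DNS)     # share the DNS cache

# Any easy handle attached to the share object reuses that state.
handle = pycurl.Curl()
handle.setopt(pycurl.URL, "https://example.com/")
handle.setopt(pycurl.SHARE, share)
# handle.perform() would now resolve names via the shared DNS cache.
```

A second easy handle given the same SHARE option would then reuse cookies and resolved addresses instead of starting from scratch.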
Daniel Stenberg goes through some basic libcurl fundamentals and API design and explains how easily you can get your first transfers going in your own application. libcurl is the de facto standard library for Internet transfers and runs on virtually all platforms. The language focus is on C/C++, but the concepts apply even if you use libcurl bindings for other languages.
Video games are written as a main loop: process player input, update the state of the game, render a new frame to the screen, repeat. They do this 60 times a second, with millisecond timing. Most monitoring tools are also written as loops: send a probe, wait for the response, update a data store, sleep. Often this is done pretty slowly, maybe once a second! In video games, if you can’t update fast enough, you skip the rendering step and the frame rate drops. With monitoring tools, if your loop takes too long you also stop logging data as often, and instead of choppy gameplay you get gaps in your graphs, often when you need that data the most!
Let’s use ping as an example and see how we can rewrite its main loop to function more like a video game, keeping a high frame rate.
Fault tolerance in general is a challenging topic. Yet we need fault-tolerant designs more badly than ever in order to provide robust, highly available systems - especially in times of scale-out systems becoming more and more popular.
Unfortunately, most developers do not pay much attention to fault-tolerant design, either because they are scared by the complexity of the realm or because they do not care enough. One of the problems is that a lack of fault-tolerant design does not hurt a lot in development or in QA, but it hurts a lot in production - as Michael Nygard said: "It's all about production!" (at least figuratively).
In this presentation I do *not* try to give a general introduction to fault-tolerant design. Instead I pick a few generic case studies that demonstrate the consequences of missing fault-tolerant design, try to raise awareness of the production relevance of fault-tolerant design, and then go through a few selected patterns. I picked patterns which are surprisingly easy to implement and help to mitigate the problems of the former case studies.
This way I try to show two things:
1. A piece of architecture or design as a pattern is not necessarily hard to implement. Sometimes the code is written quicker than it takes to explain the pattern beforehand.
2. Even if fault-tolerant design as a general topic might be hard, some parts of it can be implemented very easily, and it's more than worth the coding effort once you see how much better your system behaves in production just from adding those few lines of code.
Please help with the 3 questions below; the Python script is at the bottom. I cannot get it to work correctly - please indicate where the error is. Thanks.
Question-01: Approximately how much longer does it take to do a round-trip ping from/to a
remote machine than from/to localhost? (Note, answers may vary if you are doing the
experiment from your home or from the CS building itself and whether the destination is in
North America or some other continent).
Question-02: Currently, the program calculates the round-trip time for each packet and prints it
out individually. Modify this to correspond to the way the standard ping program works. You
will need to report the minimum, maximum, and average RTTs at the end of all pings from the
client. In addition, calculate the packet loss rate (in percentage).
Question-03: Your program can only detect timeouts in receiving ICMP echo responses. Modify
the Pinger program to parse the ICMP response error codes and display the corresponding error
results to the user. Examples of ICMP response error codes are 0: Destination Network
Unreachable, 1: Destination Host Unreachable.
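For Question-02, the end-of-run statistics can be computed from the list of measured RTTs. A minimal sketch (the function name and the way RTTs are collected are my assumptions, not part of the given skeleton):

```python
def ping_summary(rtts_ms, sent):
    """Summarize RTTs (in ms) the way the standard ping program does."""
    received = len(rtts_ms)
    loss_pct = 100.0 * (sent - received) / sent
    if received == 0:
        return {"loss_pct": loss_pct}
    return {
        "min": min(rtts_ms),
        "max": max(rtts_ms),
        "avg": sum(rtts_ms) / received,
        "loss_pct": loss_pct,
    }

# Example: 4 pings sent, one timed out.
stats = ping_summary([23.1, 25.7, 24.0], sent=4)
print("min/avg/max = %(min).1f/%(avg).1f/%(max).1f ms, "
      "%(loss_pct).0f%% packet loss" % stats)
```

In the Pinger loop you would append each successful RTT to the list, count every send, and call something like this after the last reply or timeout.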
In this lab, you will gain a better understanding of Internet Control Message Protocol (ICMP).
You will learn to implement a Ping application using ICMP request and reply messages. Ping is
a computer network application used to test whether a particular host is reachable across an IP
network. It is also used to self-test the network interface card of the computer or as a latency test.
It works by sending ICMP echo request packets to the target host and listening for ICMP echo
reply packets. The "echo reply" is sometimes called a pong. Ping measures the round-trip time,
records packet loss, and prints a statistical summary of the echo reply packets received (the
minimum, maximum, and the mean of the round-trip times and in some versions the standard
deviation of the mean).
Your task is to develop your own Ping application in Python. Your application will use ICMP
but, in order to keep it simple, will not exactly follow the official specification in RFC 1739.
Note that you will only need to write the client side of the program, as the functionality needed
on the server side is built into almost all operating systems. You should complete the Ping
application so that it sends ping requests to a specified host separated by approximately one
second. Each message contains a payload of data that includes a timestamp. After sending each
packet, the application waits up to one second to receive a reply. If one second goes by without a
reply from the server, then the client assumes that either the ping packet or the pong packet was
lost in the network (or that the server is down).
This lab requires you to compose new python code. A skeleton framework is given, you will
need to fill in the blanks.
This lab will require you to build and/or decode a packed binary array of data that is specified by
the ICMP protocol. To assist you, the ICMP protocol specification is provided.
For this project, you must complete the provided partial C++ program.
For a C++ programming class: write a program that converts gallons into liters. One gallon = 3.785
liters. Display the titles "gallons" and "liters" first. Then the program should display gallons
from 10 to 20 in one-gallon increments and the corresponding liter equivalents.
Solution
#include <stdio.h>
#define GtoL 3.785 /* liters per gallon */
int main()
{
int count; /* number of gallons to be converted to liters */
printf("Converter - Gallons to Liters\n");
printf("%10s %10s\n", "gallons", "liters");
for (count = 10; count <= 20; count++)
{
/* convert this gallon count and print the liter equivalent */
printf("%10d %10.3f\n", count, count * GtoL);
}
return 0;
}
Container Orchestration from Theory to Practice - Docker, Inc.
Join Laura Frank and Stephen Day as they explain and examine technical concepts behind container orchestration systems, like distributed consensus, object models, and node topology. These concepts build the foundation of every modern orchestration system, and each technical explanation will be illustrated using Docker’s SwarmKit as a real-world example. Gain a deeper understanding of how orchestration systems like SwarmKit work in practice and walk away with more insights into your production applications.
Daniel Stenberg discusses some of the most common mistakes users make when using libcurl and what to do about them.
Video: https://youtu.be/0KfDdIAirSI
So you're done developing your web service, but what will happen when you deploy it to production? In other words: how ready is your service to receive live traffic? In this talk you'll learn what it takes to transition your web service from "done" to "ready" and make sure that you can release to production seamlessly and with confidence.
BUMP implementation in Java
The project is to implement the BUMP client in Java, with window size 1. Here is an overview of the three WUMP protocols (BUMP, HUMP, and CHUMP). Here are the files wumppkt.java, containing the packet format classes, and wclient.java, which contains an outline of the actual program. Only the latter file should be modified; you should not have to make changes to wumppkt.java.
What you are to do is the following, by modifying and extending the wclient.java outline file:
· Implement the basic transfer
· Add all appropriate packet sanity checks: timeouts, host/port, size, opcode, and block number
· Generate output. The transferred file is to be written to System.out. A status message about every packet (listing size and block number) is to be written to System.err. Do not confuse these!
· Terminate after a packet of size less than 512 is received
· Implement an appropriate "dallying" strategy
· Send an ERROR packet if the client receives a packet from the wrong port. The appropriate ERRCODE in this case is EBADPORT.
An outline of the program main loop is attached. It is recommended that you implement this in phases, as follows.
1. Latch on to the new port: save the port number from Data[1], and make sure all ACKs get sent to this port. This will mean that the transfer completes. You should also make sure the client stops when a packet with less than 512 bytes of data is received. Unless you properly record the source port for Data[1], you have no place to which to send ACK[1]!
2. For each data packet received, write the data to System.out. All status messages should go to System.err, so the two data streams are separate if stdout is redirected. To write to System.out, use System.out.write:
System.out.write(byte[] buf, int offset, int length);
For your program, offset will be 0, buf will typically be dpacket.data(), where dpacket is of type DATA (wumppkt.DATA). The length will be dpacket.size() - wumppkt.DHEADERSIZE (or, equivalently, dg.getLength() - wumppkt.DHEADERSIZE, where dg is a DatagramPacket object).
3. Add sanity checks, for (in order) host/port, packet size, opcode, and block number.
4. Handle timeouts, by retransmitting the most recently sent packet when the elapsed time exceeds a certain amount (4 seconds?). One way to do this is to keep a DatagramPacket variable LastSent, which can either be reqDG or ackDG, and just resend LastSent. Note that the response to an InterruptedIOException, a "true" timeout, will simply be to continue the loop again.
5. Add support for dallying and error packets. After the client has received the file, dallying means waiting 2.0 - 3.0 timeout intervals (or more) to see if the final data packet is retransmitted. If it is, the final ACK was lost, and the dally period gives the client an opportunity to resend it. Error packets are to be sent to any sender of an apparent data packet that comes from the wrong port.
vanilla - normal transfer
lose - lose ever ...
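Phases 1-4 boil down to a stop-and-wait loop. Here is a language-neutral Python sketch of that control flow (send, recv, and make_ack are hypothetical stand-ins for the DatagramSocket calls in wclient.java, and the byte-string packets stand in for reqDG/ackDG):

```python
TIMEOUT = 4.0  # seconds, per the assignment's suggestion

def transfer(send, recv, make_ack):
    """Stop-and-wait receive loop: resend the last sent packet on timeout."""
    last_sent = b"REQ"          # analogue of reqDG / the LastSent variable
    expected = 1                # next block number we want
    received = bytearray()
    send(last_sent)
    while True:
        pkt = recv(TIMEOUT)     # (block, size, data), or None on timeout
        if pkt is None:
            send(last_sent)     # timeout: retransmit LastSent, keep looping
            continue
        block, size, data = pkt
        if block != expected:
            continue            # sanity check: ignore wrong block numbers
        received += data
        last_sent = make_ack(block)   # analogue of ackDG
        send(last_sent)
        expected += 1
        if size < 512:          # a short packet ends the transfer
            return bytes(received)
```

The real client additionally checks host/port and opcode before accepting a packet, and dallies after the final ACK instead of returning immediately.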
Programming For Big Data [ Submission: DvcScheduleV2.cpp and StaticArray.h and/or
DynamicArray.h ]
Assignment 5's runtime was too slow -- a couple of minutes or so. It's because of the duplicate-
checking, with over 4 billion compares.
Rewrite the duplicate-checking logic from Assignment 5, using a technique from "Techniques
For Big Data, Reading" to do fewer compares (check the term first then section number for the
duplicate check), and come up with the exact same results as Assignment 5.
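The idea behind "check the term first, then the section number" is easiest to see outside C++. A Python sketch of one possible reading of the referenced technique (sort once, then scan adjacent entries) - an illustration of the concept, not the required StaticArray/DynamicArray implementation:

```python
def count_duplicates(rows):
    """rows is a list of (term, section) pairs.

    Sorting by term first, then section, puts duplicates next to each
    other, so a single linear scan replaces the all-pairs comparison:
    the ~n^2 compares of the old nested loop become n*log(n) for the
    sort plus n for the scan.
    """
    ordered = sorted(rows)            # compares term first, then section
    dups = 0
    for prev, cur in zip(ordered, ordered[1:]):
        if prev == cur:               # equal term AND equal section
            dups += 1
    return dups

rows = [("Spring2020", "1001"), ("Fall2020", "2002"),
        ("Spring2020", "1001"), ("Fall2020", "2003")]
```

In the C++ version the same effect can be had by keeping the (term, section) array sorted as it grows, or by sorting it before the duplicate pass, without any STL containers.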
You may use your StaticArray.h from Assignment 3 and/or your DynamicArray.h from
assignments 4, but you may not use any STL containers. Submit the H file(s) you use in your
solution, even if there are no changes since your previous work. Your project will be compiled
for grading using the default stack memory size of 1MB.
Since this version is supposed to be fast, there is no longer a need for a progress bar. Include one
if you wish (you may see the run time dramatically changed), or you may leave it out -- your
choice. But if you do have a progress bar, do remember to "flush"...
[Submission] - Submit the driver program (DvcScheduleV2.cpp) with the header files used
The code I wrote for previous assignment:
Main:
#include <iostream>
#include <fstream>
#include <string>
#include <cstring>
#include "DynamicArray.h"
using namespace std;
struct Class
{
string code;
int count;
};
int main()
{
DynamicArray<Class> sub;
DynamicArray<string> sem;
DynamicArray<string> sec;
int totalSubjects = 0;
int dup = 0;
int total = 0;
int counter = 0;
bool duplicate;
bool stored;
// For parsing input file
char* token;
char buf[1000];
const char* const tab = "\t";
// Open input file
ifstream fin;
fin.open("dvc-schedule.txt");
if (!fin.good())
{
cout << "I/O error. File can't be found!\n";
return 1; // Exit the program with an error code
}
// Read the input file
while (fin.good())
{
// Progress bar
if (counter % 1000 == 0)
{
cout << '.';
cout.flush();
}
duplicate = false;
stored = false;
string line;
getline(fin, line);
total++; // Total lines processed
strcpy(buf, line.c_str());
if (buf[0] == 0)
continue; // Skip blank lines
// Parse the line
const string term(token = strtok(buf, tab));
const string section(token = strtok(0, tab));
const string course((token = strtok(0, tab)) ? token : "");
const string instructor((token = strtok(0, tab)) ? token : "");
const string whenWhere((token = strtok(0, tab)) ? token : "");
if (course.find('-') == string::npos)
continue;
const string code(course.begin(), course.begin() + course.find('-'));
// Check for duplicates
for (int i = 0; i < counter; i++)
{
if (sem[i] == term && sec[i] == section)
{
dup++;
duplicate = true;
break;
}
}
if (duplicate == true)
continue;
sem[counter] = term;
sec[counter] = section;
counter++;
for (int i = 0; i < totalSubjects; i++)
{
if (sub[i].code == code)
{
sub[i].count++;
stored = true;
break;
}
}
if (stored == true)
continue;
Class y;
y.code = code;
y.count = 1;
sub[totalSubjects] = y;
totalSubjects++;
}
fin.close();
cout << endl;
for (int i = 0; i < totalSubjects; i++)
{
cout << sub[i].code << ", " << sub[i].count << " sections" << endl;
}
return 0;
}
Daniel Stenberg takes us through how the curl project is doing today: Git activity, contributors, committers, mailing list, growth, money and sponsors, his own role and much more. Video here: https://youtu.be/6ueyZGhtj1Q
HTTP/3 is designed to improve in areas where HTTP/2 still has some shortcomings, primarily by changing the transport layer. HTTP/3 is the first major protocol to step away from TCP and instead it uses QUIC.
HTTP/3 is the designated name for the coming next version of the protocol that is currently under development within the QUIC working group in the IETF.
Daniel Stenberg does a presentation about HTTP/3 and QUIC. Why the new protocols are deemed necessary, how they work, how they change how things are sent over the network and what some of the coming deployment challenges will be.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Key Trends Shaping the Future of Infrastructure.pdf
mastering libcurl part 2
1. @bagder
/* set the options (I left out a few, you will get the point anyway) */
curl_easy_setopt(handles[HTTP_HANDLE], CURLOPT_URL, "https://example.com");
curl_easy_setopt(handles[FTP_HANDLE], CURLOPT_URL, "ftp://example.com");
curl_easy_setopt(handles[FTP_HANDLE], CURLOPT_UPLOAD, 1L);
/* init a multi stack */
multi_handle = curl_multi_init();
/* add the individual transfers */
for(i = 0; i<HANDLECOUNT; i++)
curl_multi_add_handle(multi_handle, handles[i]);
while(still_running) {
CURLMcode mc = curl_multi_perform(multi_handle, &still_running);
if(still_running)
/* wait for activity, timeout or "nothing" */
mc = curl_multi_poll(multi_handle, NULL, 0, 1000, NULL);
if(mc)
break;
}
/* See how the transfers went */
while((msg = curl_multi_info_read(multi_handle, &msgs_left))) {
if(msg->msg == CURLMSG_DONE) {
int idx;
/* Find out which handle this message is about */
for(idx = 0; idx<HANDLECOUNT; idx++) {
int found = (msg->easy_handle == handles[idx]);
if(found)
break;
}
switch(idx) {
case HTTP_HANDLE:
printf("HTTP transfer completed with status %d\n", msg->data.result);
break;
case FTP_HANDLE:
printf("FTP transfer completed with status %d\n", msg->data.result);
break;
}
}
}
/* remove the transfers and cleanup the handles */
for(i = 0; i<HANDLECOUNT; i++) {
curl_multi_remove_handle(multi_handle, handles[i]);
curl_easy_cleanup(handles[i]);
}
#include <stdio.h>
#include <curl/curl.h>
int main(void)
{
CURL *curl;
CURLcode res;
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, "https://example.com");
/* example.com is redirected, so we tell libcurl to follow redirection */
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
/* Perform the request, res will get the return code */
res = curl_easy_perform(curl);
/* Check for errors */
if(res != CURLE_OK)
fprintf(stderr, "curl_easy_perform() failed: %s\n",
curl_easy_strerror(res));
/* always cleanup */
curl_easy_cleanup(curl);
}
return 0;
}
mastering libcurl
November 20, 2023 Daniel Stenberg
more libcurl source code and details in a single video than you ever saw before
part two
3. Setup - November 20, 2023
Live-streamed
Expected to last multiple hours
Recorded
Lots of material never previously presented
There will be LOTS of source code on display
https://github.com/bagder/mastering-libcurl
5. mastering libcurl
Part 1: The project, Getting it, API and ABI, Architecture, API fundamentals, Setting up
Part 2: Transfers, Share API, TLS, Proxies, HTTP, Header API, URL API, WebSocket, Future
8. Downloads: storing
libcurl delivers data to CURLOPT_WRITEFUNCTION
Pass in a custom pointer to the callback with CURLOPT_WRITEDATA
Defaults to fwrite() to stdout: rarely what you want
The function is called zero, one or many times.
Gets 1 byte to 16kB of data per call
The exact amount depends on factors beyond your control and varies - do not presume!
10. Downloads: compression
no data compression is done by default
HTTP compression is download-only
(HTTP/2 and HTTP/3 header compression is not optional)
For HTTP, set CURLOPT_ACCEPT_ENCODING to “”
For SSH, set CURLOPT_SSH_COMPRESSION to 1L
Beware of decompression bombing
The write callback is called the same way
12. Downloads: multiple
Reuse easy handles and call curl_easy_perform() again for serial
Add multiple easy handles to a multi handle for parallel
Multi handle transfers can be made to multiplex (more details later)
Run multiple instances in separate threads
13. Downloads: maximum file size
A default file transfer has no size nor time limit
CURLOPT_MAXFILESIZE_LARGE
CURLOPT_TIMEOUT_MS
Or limit yourself in the write callback
or in the progress callback
14. Downloads: resume and ranges
libcurl can continue a previous transfer or get a partial resource
CURLOPT_RESUME_FROM_LARGE
CURLOPT_RANGE
range.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/typo.html");
curl_easy_setopt(curl, CURLOPT_RANGE, "200-999");
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
15. Downloads: buffer size
CURLOPT_BUFFERSIZE is 16kB by default
Allocated and associated with the easy handle
10MB maximum
May affect maximum possible transfer speed
16. Uploads: providing data
CURLOPT_READFUNCTION
Returning error stops transfer
int main(void)
{
CURL *curl;
CURLcode res;
struct WriteThis wt;
wt.readptr = data;
wt.sizeleft = strlen(data);
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se");
curl_easy_setopt(curl, CURLOPT_POST, 1L);
curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
curl_easy_setopt(curl, CURLOPT_READDATA, &wt);
curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
res = curl_easy_perform(curl);
if(res)
fprintf(stderr, "curl_easy_perform() failed: %s\n",
curl_easy_strerror(res));
curl_easy_cleanup(curl);
}
return 0;
}
read-callback.c
struct WriteThis {
const char *readptr;
size_t sizeleft;
};
static size_t read_cb(char *dest, size_t size, size_t nmemb,
void *userp)
{
struct WriteThis *wt = (struct WriteThis *)userp;
size_t buffer_size = size*nmemb;
if(wt->sizeleft) {
/* [some code left out] */
memcpy(dest, wt->readptr, copy_this_much);
return copy_this_much; /* we copied this many bytes */
}
return 0; /* no more data left to deliver */
}
read-callback.c
17. Uploads: providing data
Two more common ways, details follow later in the HTTP section
CURLOPT_POSTFIELDS
CURLOPT_MIMEPOST
18. Uploads: multiple uploads
Of course you can mix uploads and downloads
The same rules apply for multiple uploads as for multiple downloads
19. Uploads: buffer size
CURLOPT_UPLOAD_BUFFERSIZE is 64kB by default
Allocated when needed and associated with the easy handle
2MB maximum
May affect maximum possible transfer speed
22. multiplexing
HTTP/2 or HTTP/3
transfers added to a multi handle
CURLMOPT_PIPELINING
There is a max number of streams
Connections can GOAWAY
Multiplexing (or not) is done transparently
same [ multi handle + HTTP(S) + port number + host name + HTTP version ] == multiplexing possible
23. Transfer controls: stop
Transfers continue until done (success or fail)
There are several timeouts
Return error from a callback
Careful with threads
With multi interface, remove the easy handle from the multi
24. Transfer controls: stop slow transfers
By default, a transfer can stall for any
period without that being an error.
Stop transfer if below N bytes/sec
during M seconds:
N - CURLOPT_LOW_SPEED_LIMIT
M - CURLOPT_LOW_SPEED_TIME
lowspeed.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
/* abort if slower than 30 bytes/sec during 60 seconds */
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 60L);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 30L);
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
25. Transfer controls: rate limit
Do not transfer data faster than N
bytes/sec
Separate options for receiving and
sending
attempts to keep the average speed below the given threshold over a period of time
maxspeed.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
curl_off_t maxrecv = 31415;
curl_off_t maxsend = 67954;
curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, maxrecv);
curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, maxsend);
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
26. Transfer controls: progress meter
libcurl can output a progress meter
on stderr
Disabled by default
Awkward reverse option:
CURLOPT_NOPROGRESS - set to 1L to
disable progress meter
Return error to stop transfer
meter.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
/* enable progress meter */
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
27. Transfer controls: progress callback
keep track of transfer progress yourself
also called on idle with easy interface
progress-cb.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
struct progress data = {curl, 1000000 };
curl_easy_setopt(curl, CURLOPT_XFERINFODATA, &data);
curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION,
progress_cb);
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
progress-cb.c
struct progress {
CURL *handle;
size_t size;
};
static int progress_cb(void *clientp, curl_off_t dltotal,
curl_off_t dlnow, curl_off_t ultotal,
curl_off_t ulnow)
{
struct progress *memory = clientp;
/* use the values */
return 0; /* all is good */
}
28. Timeouts
By default, libcurl typically has no or very liberal timeouts
You might want to narrow things down
CURLOPT_TIMEOUT[_MS]
CURLOPT_CONNECTTIMEOUT[_MS]
Make your own with the progress callback
29. post transfer meta-data
curl_easy_getinfo() returns info about the previous transfer
There are 71 different CURLINFO_* options
See their man pages for details
getinfo.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
char *ct;
char *ip;
curl_off_t dlsize;
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
res = curl_easy_perform(curl);
curl_easy_getinfo(curl, CURLINFO_CONTENT_TYPE, &ct);
curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD_T,
&dlsize);
curl_easy_getinfo(curl, CURLINFO_PRIMARY_IP, &ip);
curl_easy_cleanup(curl);
}
return (int)res;
}
ACTIVESOCKET, APPCONNECT_TIME, APPCONNECT_TIME_T, CAINFO,
CAPATH, CERTINFO, CONDITION_UNMET, CONNECT_TIME,
CONNECT_TIME_T, CONN_ID, CONTENT_LENGTH_DOWNLOAD,
CONTENT_LENGTH_DOWNLOAD_T, CONTENT_LENGTH_UPLOAD,
CONTENT_LENGTH_UPLOAD_T, CONTENT_TYPE, COOKIELIST,
EFFECTIVE_METHOD, EFFECTIVE_URL, FILETIME, FILETIME_T,
FTP_ENTRY_PATH, HEADER_SIZE, HTTPAUTH_AVAIL, HTTP_CONNECTCODE,
HTTP_VERSION, LASTSOCKET, LOCAL_IP, LOCAL_PORT, NAMELOOKUP_TIME,
NAMELOOKUP_TIME_T, NUM_CONNECTS, OS_ERRNO, PRETRANSFER_TIME,
PRETRANSFER_TIME_T, PRIMARY_IP, PRIMARY_PORT, PRIVATE, PROTOCOL,
PROXYAUTH_AVAIL, PROXY_ERROR, PROXY_SSL_VERIFYRESULT,
REDIRECT_COUNT, REDIRECT_TIME, REDIRECT_TIME_T, REDIRECT_URL,
REFERER, REQUEST_SIZE, RESPONSE_CODE, RETRY_AFTER,
RTSP_CLIENT_CSEQ, RTSP_CSEQ_RECV, RTSP_SERVER_CSEQ,
RTSP_SESSION_ID, SCHEME, SIZE_DOWNLOAD, SIZE_DOWNLOAD_T,
SIZE_UPLOAD, SIZE_UPLOAD_T, SPEED_DOWNLOAD, SPEED_DOWNLOAD_T,
SPEED_UPLOAD, SPEED_UPLOAD_T, SSL_ENGINES, SSL_VERIFYRESULT,
STARTTRANSFER_TIME, STARTTRANSFER_TIME_T, TLS_SESSION,
TLS_SSL_PTR, TOTAL_TIME, TOTAL_TIME_T, XFER_ID
30. threading
Never share curl handles simultaneously across multiple threads
Using separate handles in separate threads is fine
Share partial data between handles in separate threads with the share API
Multi-threaded is a sensible option if CPU-bound
All libcurl calls work in the same thread (but... )
31. error handling
Always check return codes from libcurl function calls
CURLOPT_ERRORBUFFER is your friend
Your application decides and acts on retry strategies
32. convert curl command lines to libcurl source code embryos
excellent initial get-started step
--libcurl
$ curl -H "foo: bar" https://curl.se/ --libcurl dashdash.c
[lots of HTML output]
dashdash.c
int main(int argc, char *argv[])
{
CURLcode ret;
CURL *hnd;
struct curl_slist *slist1;
slist1 = NULL;
slist1 = curl_slist_append(slist1, "foo: bar");
hnd = curl_easy_init();
curl_easy_setopt(hnd, CURLOPT_BUFFERSIZE, 102400L);
curl_easy_setopt(hnd, CURLOPT_URL, "https://curl.se/");
curl_easy_setopt(hnd, CURLOPT_NOPROGRESS, 1L);
curl_easy_setopt(hnd, CURLOPT_HTTPHEADER, slist1);
curl_easy_setopt(hnd, CURLOPT_USERAGENT, "curl/8.4.0");
curl_easy_setopt(hnd, CURLOPT_MAXREDIRS, 50L);
curl_easy_setopt(hnd, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_2TLS);
curl_easy_setopt(hnd, CURLOPT_FTP_SKIP_PASV_IP, 1L);
curl_easy_setopt(hnd, CURLOPT_TCP_KEEPALIVE, 1L);
/* Here is a list of options the curl code used that cannot get generated
as source easily. You may choose to either not use them or implement
them yourself. */
34. share data between handles
(some) caches and state are kept in the easy handle
transfers using different easy handles might not be entirely independent
the share API:
1. create a share object
2. decide what data the object should hold
3. specify which easy handles should use the object
4. one share object per transfer
36. share data between handles
Share object A: share cookies and DNS cache
Share object B: share connection cache
Let transfers share A and B as you wish
[diagram: easy transfers 1-5, each attached to share object A, share object B, or neither]
40. enable TLS
For communication with the peer
TLS is implied for “S-protocols”: HTTPS, FTPS, IMAPS, POP3S, SMTPS etc
Use the correct scheme in the URL
For FTP, IMAP, POP3, SMTP, LDAP etc: CURLOPT_USE_SSL is needed
/* require TLS or fail */
curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_ALL);
41. enable TLS for the proxy
the proxy connection is controlled separately
use HTTPS:// proxy for best privacy
curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example:8081");
42. ciphers
libcurl defaults to sensible, modern and safe ciphers
assuming you use a modern TLS library
CURLOPT_SSL_CIPHER_LIST
CURLOPT_TLS13_CIPHERS
again, the proxy config is set and managed separately
43. verifying server certificates
libcurl verifies TLS server certificates by default (CURLOPT_SSL_VERIFYPEER)
using the default CA store
...or the one you point to (CURLOPT_CAINFO)
craft your own verification with CURLOPT_SSL_CTX_FUNCTION
https proxy config is set and managed separately
never disable server certificate verification in production
https://curl.se/docs/caextract.html
44. “blob” alternatives
For your systems where you don’t have a file system or want to avoid using files
CURLOPT_CAINFO_BLOB
CURLOPT_ISSUERCERT_BLOB
CURLOPT_PROXY_CAINFO_BLOB
CURLOPT_PROXY_ISSUERCERT_BLOB
CURLOPT_PROXY_SSLCERT_BLOB
CURLOPT_PROXY_SSLKEY_BLOB
CURLOPT_SSLCERT_BLOB
CURLOPT_SSLKEY_BLOB
#define CURL_BLOB_COPY 1
#define CURL_BLOB_NOCOPY 0
struct curl_blob {
void *data;
size_t len;
unsigned int flags;
};
struct curl_blob blob = { bufptr, buflen, CURL_BLOB_COPY };
curl_easy_setopt(curl, CURLOPT_CAINFO_BLOB, &blob);
45. TLS backend(s)
libcurl supports more than one TLS backend built-in
one of the backends is set as the default
select the desired TLS backend as the very first thing, before any other libcurl call, with
curl_global_sslset()
BearSSL
AWS-LC
GnuTLS
mbedTLS
OpenSSL
Schannel
wolfSSL
Secure Transport
rustls
BoringSSL
libressl
AmiSSL
46. SSLKEYLOGFILE
TLS transfers are encrypted
encrypted transfers can’t be snooped upon
unless we can extract the secrets in run-time
set the environment variable named SSLKEYLOGFILE to a file name
tell wireshark to read secrets from that file name
then run your libcurl-using application as normal
(also works with browsers)
50. a proxy is an intermediary
a server application that acts as an intermediary between a client requesting a
resource and the server providing that resource
[diagram: client in Network A → proxy → website in Network B]
55. HTTP versions: CURLOPT_HTTP_VERSION
libcurl supports HTTP/0.9, HTTP/1.0, HTTP/1.1, HTTP/2 and HTTP/3
Generally: you don’t need to care
Different over the wire, made to look similar for the application
HTTP/0.9 must be enabled with CURLOPT_HTTP09_ALLOWED
HTTP/1.0 with CURL_HTTP_VERSION_1_0
HTTP/1.1 is a general default or CURL_HTTP_VERSION_1_1
HTTP/2 is default over HTTPS (CURL_HTTP_VERSION_2TLS), or used for clear text
HTTP as well with CURL_HTTP_VERSION_2_0
HTTP/3 is asked for with CURL_HTTP_VERSION_3 or CURL_HTTP_VERSION_3ONLY
56. [diagram: the HTTP stacks compared. HTTP/1, HTTP/2 and HTTP/3 all carry the same HTTP semantics. HTTP/1 and HTTP/2 run over TLS 1.2+ on TCP connections, with HTTP/2 adding streams, header compression and server push. HTTP/3 gets streams, header compression and server push from QUIC, which has TLS 1.3 built in and runs over UDP. TCP and UDP live in kernel space, while QUIC runs in user space, over IPv4 / IPv6.]
57. Response code
Every HTTP response has a three-digit response code
1xx informational response
2xx success
3xx redirection
4xx client errors
5xx server errors
libcurl returns CURLE_OK for a successful transfer
Independently of response code!
59. Redirects
HTTP often redirects the client
HTTP response code 30x + a Location: header
By default libcurl does not follow redirects
CURLOPT_FOLLOWLOCATION
follow.c
#include <curl/curl.h>
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/typo.html");
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
60. HTTP METHOD
GET is used by default
POST is used for CURLOPT_POST, CURLOPT_POSTFIELDS or CURLOPT_MIMEPOST
PUT is used for CURLOPT_UPLOAD
Change the method with CURLOPT_CUSTOMREQUEST
WARNING: this only replaces the method string - libcurl otherwise still behaves as before
61. HTTP POST
CURLOPT_POSTFIELDS - provide data in a buffer
CURLOPT_POSTFIELDSIZE_LARGE - if not zero terminated
CURLOPT_COPYPOSTFIELDS - if you want libcurl to copy
CURLOPT_READFUNCTION - as seen on slide 16
CURLOPT_POST - tell it is a POST
CURLOPT_MIMEPOST - structured data in one or many “parts”, see next slide
63. HTTP multipart formpost
This is a POST sending data in a special multipart format
Content-Type multipart/form-data
The data is sent as a series of “parts”, one or more
Each part has a name, separate headers, file name and more
Each part is separated by a “mime boundary”
64. The curl MIME API
CURLOPT_MIMEPOST wants a curl_mime * argument.
curl_mime * is a handle to a complete “multipart”
curl_mime *multipart = curl_mime_init(curl_handle);
Then add parts with curl_mime_addpart(multipart);
curl_mimepart *part = curl_mime_addpart(multipart);
Each part has a set of properties, name and data being the key ones.
curl_mime_name(part, "name");
curl_mime_data(part, "daniel", CURL_ZERO_TERMINATED);
Then you can add another part
65. More HTTP mimepost
We can insert part data from a file with curl_mime_filedata() or from a callback with
curl_mime_data_cb()
Provide a set of headers for a part with curl_mime_headers()
66. HTTP request headers
An HTTP request is a method, a path and a sequence of headers
libcurl inserts the set of headers it thinks are necessary, a bare minimum
CURLOPT_HTTPHEADER lets the application add, change or remove headers
mod-headers.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
struct curl_slist *list = NULL;
curl = curl_easy_init();
if(curl) {
/* add a custom one */
list = curl_slist_append(list, "Shoesize: 10");
/* remove an internally generated one */
list = curl_slist_append(list, "Accept:");
/* change an internally generated one */
list = curl_slist_append(list, "Host: curl.example");
/* provide one without content */
list = curl_slist_append(list, "Empty;");
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, list);
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
67. Conditionals
only get the resource if newer (or older) than this
CURLOPT_TIMECONDITION +
CURLOPT_TIMEVALUE_LARGE
Also works with FTP, FILE and RTSP
ifmodified.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
curl_off_t when = 1698793200; /* November 1, 2023 */
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/index.html");
curl_easy_setopt(curl, CURLOPT_TIMEVALUE_LARGE, when);
curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
(long)CURL_TIMECOND_IFMODSINCE);
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
68. ranges
Ask for a piece of a remote resource
The server may ignore the ask 🙁
range.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/typo.html");
curl_easy_setopt(curl, CURLOPT_RANGE, "200-999");
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
70. Cookies
Cookies are name=value pairs for specific destinations
For a specific domain + path
When enabling the “cookie engine”, cookies are held in memory
When cookies are enabled, they are received and sent following cookie rules
INPUT:
CURLOPT_COOKIEFILE - read cookies from this file
CURLOPT_COOKIESESSION - consider this a new cookie session
CURLOPT_COOKIE - send only this specific cookie
OUTPUT:
CURLOPT_COOKIEJAR - write cookies to this file
72. alt-svc
Alt-Svc: is a response header
“this service is available on that host for the next N seconds”
also says which HTTP versions it supports there
www.example.com:443 can then be provided by another.example.org:8765
The original way to bootstrap into using HTTP/3
in its nature for “the next connect attempt”
this cache is kept in memory
can be saved to and loaded from a file
specify which HTTP versions you want to “follow”
73. alt-svc controls
CURLOPT_ALTSVC specifies the alt-svc cache file name
CURLOPT_ALTSVC_CTRL specifies which protocols to allow
altsvc.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL,
CURLALTSVC_H1|CURLALTSVC_H2|CURLALTSVC_H3);
curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc-cache.txt");
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
74. HSTS
Strict-Transport-Security: is a response header
“access this host name only over HTTPS for the next N seconds”
this cache is kept in memory
can be saved to and loaded from a file
makes subsequent requests avoid clear text transfers
75. HSTS control
CURLOPT_HSTS specifies the cache
file name
CURLOPT_HSTS_CTRL controls
behavior: enable and readonly
With CURLOPT_HSTSREADFUNCTION
and CURLOPT_HSTSWRITEFUNCTION
you can change storage.
hsts.c
int main(void)
{
CURL *curl;
CURLcode res = CURLE_OK;
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
curl_easy_setopt(curl, CURLOPT_HSTS_CTRL, (long)CURLHSTS_ENABLE);
curl_easy_setopt(curl, CURLOPT_HSTS, "hsts-cache.txt");
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
return (int)res;
}
76. HTTP/2
HTTP version is transparent to the application
libcurl defaults to negotiating HTTP/2 for HTTPS
When using the multi interface, libcurl can multiplex HTTP/2
CURLOPT_STREAM_DEPENDS, CURLOPT_STREAM_DEPENDS_E and
CURLOPT_STREAM_WEIGHT
77. HTTP/3
HTTP/3 needs to be asked for
HTTP/3 is only used over HTTPS
CURL_HTTP_VERSION_3 means try HTTP/3 but fall back gracefully if needed
CURL_HTTP_VERSION_3ONLY does not allow falling back
When using the multi interface, libcurl can multiplex HTTP/3
83. URL API : basics
RFC 3986+
This is not the WHATWG URL Specification
Parse, change, generate URLs
The CURLU * handle represents a URL
Create a URL handle with curl_url()
Or duplicate an existing with curl_url_dup()
84. URL API: set and get parts
You set URL parts to add them to the handle
You get URL parts to extract them from the handle
85. URL API: parts
CURLUPART_[name]
URL is the entire thing
scheme://user:password@host:1234/path?query#fragment
URL
SCHEME
USER
PASSWORD
OPTIONS
HOST
ZONEID
PORT
PATH
QUERY
FRAGMENT
86. Parse a complete URL
By setting the URL in the handle
Get it from the handle to figure out what it holds
url-set.c
int main(void)
{
CURLUcode rc;
CURLU *url = curl_url();
rc = curl_url_set(url, CURLUPART_URL,
"https://example.com", 0);
if(!rc) {
char *norm;
rc = curl_url_get(url, CURLUPART_URL, &norm, 0);
if(!rc && norm) {
printf("URL: %s\n", norm);
curl_free(norm);
}
}
curl_url_cleanup(url);
return 0;
}
87. Set URL components
By setting the components in the handle
Get the full URL from the handle to figure out what it holds
url-set-parts.c
int main(void)
{
char *output;
CURLUcode rc;
CURLU *url = curl_url();
curl_url_set(url, CURLUPART_SCHEME, "https", 0);
curl_url_set(url, CURLUPART_HOST, "curl.se", 0);
curl_url_set(url, CURLUPART_PORT, "443", 0);
curl_url_set(url, CURLUPART_PATH, "/index.html", 0);
rc = curl_url_get(url, CURLUPART_URL, &output, 0);
if(!rc && output) {
printf("URL: %s\n", output);
curl_free(output);
}
curl_url_cleanup(url);
return 0;
}
$ gcc url-set-parts.c -lcurl
$ ./a.out
URL: https://curl.se:443/index.html
88. Extract URL components
First we set a URL
Get components from the URL
url-get-parts.c
int main(void)
{
char *port;
char *query;
CURLU *url = curl_url();
curl_url_set(url, CURLUPART_URL,
"https://example.com:8080/donkey.php?age=7", 0);
curl_url_get(url, CURLUPART_PORT, &port, 0);
curl_url_get(url, CURLUPART_QUERY, &query, 0);
if(port) {
printf("Port: %s\n", port);
curl_free(port);
}
if(query) {
printf("Query: %s\n", query);
curl_free(query);
}
curl_url_cleanup(url);
return 0;
}
$ gcc url-get-parts.c -lcurl
$ ./a.out
Port: 8080
Query: age=7
89. Redirect
Set an absolute URL
Then set a relative URL
redirect.c
int main(void)
{
char *dest;
CURLUcode rc;
CURLU *url = curl_url();
curl_url_set(url, CURLUPART_URL,
"https://curl.se/this/cool/path/here.html", 0);
curl_url_set(url, CURLUPART_URL, "../../second/take/moo.jpg", 0);
rc = curl_url_get(url, CURLUPART_URL, &dest, 0);
if(!rc && dest) {
printf("URL: %s\n", dest);
curl_free(dest);
}
curl_url_cleanup(url);
return 0;
}
$ gcc redirect.c -lcurl
$ ./a.out
URL: https://curl.se/this/second/take/moo.jpg
95. How to dig deeper
curl is the accumulated result and experience of 25 years of improvements
almost 500 man pages
Everything curl
source code
ask the community!
97. Going next?
curl is 25 years old
curl has been growing and developing its entire lifetime
curl development speed is increasing
the Internet does not stop or slow down
protocols and new ways of doing Internet transfers keep popping up
new versions, new systems, new concepts and new ideas keep coming
there is no slowdown in sight
reasonably, curl will keep developing
curl will keep expanding, get new features, get taught new things
we, the community, make it do what we think it should do
you can affect what’s next for curl