Lecture at the event "Fourth DwB Training Course: Working with data from official statistics in Europe – particularly the European Union Labour Force Survey (EU-LFS)", Data without Boundaries (DwB), Ljubljana, Slovenia.
Participants learned about the policies, principles, and benefits associated with open access to research data.
Related content: http://www.adp.fdv.uni-lj.si/blog/2015/blog/odprta-dostopnost-recenziranih-publikacij-in-raziskovalnih-podatkov-dolocila-financerjev-in-prakticna-izvedba/#axzz43uNG9ipW
http://www.adp.fdv.uni-lj.si/konference-in-dogodki/SIDIH_2015_5/
The presentation introduced the Social Science Data Archive (ADP) as an infrastructure centre.
Related page: http://www.uni-lj.si/raziskovalno_in_razvojno_delo/mreza_raziskovalnih_infrastrukturnih_centrov/
In this contribution we presented several initiatives that highlight the issues of long-term digital preservation and the trust associated with it, which can be verified through formal certification of processes and through the organisational and governance procedures of the organisations active in this field, along with some of the more important certification standards in use, particularly in the social sciences and humanities.
This presentation accompanied a lecture for students of the Faculty of Arts in Ljubljana. We showed how to reach high-quality research data in national and international data catalogues via the ADP website. In the second part, the students practised using the Nesstar browser, which, among other things, supports searching, browsing, and analysing data directly online.
Related material: http://www.adp.fdv.uni-lj.si/adp_izobrazevanje_avg2014/presentations/Vodic%20po%20orodjih%20ADP.pdf
From its beginning, ADP has invested effort in developing and strengthening its relationship with researchers as data depositors. The effort was concentrated on promoting data services and presenting the advantages of depositing data. At the same time, we invested in assuring high-quality research through the selection and evaluation of studies and their data. Since the policies of research funding agencies are changing (driven by the requirements of the international community and Horizon 2020) so that open access to research data is becoming a requirement, and researchers are increasingly motivated to deposit their data, archives face new challenges. The aim of this presentation is (1) to trace the development of the "ADP – data depositors" relationship from the beginning, (2) to evaluate the current situation, and (3) to establish which challenges we will have to deal with.
Related site: http://www.data-archive.ac.uk/news-events/events.aspx?id=3888
In collaboration between the Social Science Data Archive and the Statistical Office of the Republic of Slovenia, numerous datasets have been prepared for researchers and students since 2012. In the labour-market field, microdata and metadata of the Labour Force Survey are available for the period 2001–2011, together with an anonymised microdata file of the Labour Force Survey 2010. The latter was prepared for wider distribution of microdata to less demanding users, including undergraduate and postgraduate students who want simpler access to official-statistics microdata. The file was carefully prepared with state-of-the-art anonymisation methods and techniques, so the high quality of the microdata was preserved: key variables yield the same descriptive results. It is therefore suitable for more demanding research as well. Besides the anonymised LFS 2010 file, ADP also distributes other microdata from the labour market and related fields: the Labour Force Survey 1997–2000, Human Resource Management (2004–2008), the Survey on Economic Migrations and Migrant Workers 2011, the Analysis of Psychosocial Risks in the Workplace in Micro, Small and Medium-Sized Enterprises 2011, etc.
Lecture at the event "Vloga knjižničarjev pri odpiranju raziskovalnih podatkov in vodenju bibliografij raziskovalcev" (The Role of Librarians in Opening Research Data and Maintaining Researchers' Bibliographies) at the Faculty of Social Sciences in Ljubljana.
Lecture for students of the Faculty of Arts, University of Ljubljana. The students were introduced to the concepts of secondary analysis and metadata. We showed how to reach high-quality research data in national and international data catalogues via the ADP website. They also learned to use the Nesstar browser, which, among other things, supports searching, browsing, and analysing data directly online. In the second part, the students followed written instructions through an independent exercise covering the study description, citation, and basic statistical analyses.
Additional materials: http://www.adp.fdv.uni-lj.si/adp_izobrazevanje_avg2014/presentations/Vodic%20po%20orodjih%20ADP.pdf
In the first part, participants are briefly introduced to the history of "informed consent" and to the document's role in planning research and in the subsequent handling of research data. They then learn about the concept of sensitive data and about the questions that proper, high-quality handling of sensitive data involves.
Post-event report: http://www.adp.fdv.uni-lj.si/blog/2014/blog/seminar-za-raziskovalce-priprava-raziskovalnih-podatkov-za-odprti-dostop/#axzz43uNG9ipW
Video: http://videolectures.net/adpseminar2014_brvar_prakticnidel/
Lecture at the event "Vloga knjižničarjev pri odpiranju raziskovalnih podatkov in vodenju bibliografij raziskovalcev" (The Role of Librarians in Opening Research Data and Maintaining Researchers' Bibliographies) at the Faculty of Social Sciences in Ljubljana.
More: http://www.adp.fdv.uni-lj.si/adp_delavnica_maj2014/
Video: http://videolectures.net/adpdelavnica2014_stebe_bezjak_politike_odprtega/
A presentation of the various Slovenian Labour Force Survey microdata files, the accompanying metadata and materials, and the modes of access. From the Fourth DwB Training Course in Ljubljana.
In the first part of the presentation, participants learned about the open-access requirements set by Horizon 2020, the new framework programme for research and innovation. Next, the requirements to be met when depositing data with ADP were presented. In the final part, participants were introduced to a tool that can help with data management planning.
Video: http://videolectures.net/adpseminar2014_bezjak_brvar_prakticnidel/
This document was produced for the workshop "Uporaba mikropodatkov Ankete o delovni sili v študijske namene" (Using Labour Force Survey Microdata for Study Purposes), organised jointly by the Social Science Data Archive and the Statistical Office of the Republic of Slovenia.
More at: http://www.adp.fdv.uni-lj.si/adp_delavnica_feb2014/index.html
The presentation was aimed at researchers, the creators of research data, who will in future have to, or will want to, provide open access to their data. They learned about the procedure for depositing materials with the Social Science Data Archive, about the forms and tools available to support deposits, and about the advantages of having their data stored in ADP. The event took place in the library of ZRS UP in Koper.
This presentation was prepared for a lecture in the course Trg delovne sile in zaposlovanje (Labour Market and Employment) at the Faculty of Social Sciences. In collaboration between the Social Science Data Archive and the Statistical Office of the Republic of Slovenia, an anonymised microdata file of the Labour Force Survey 2010 was prepared for wider distribution of microdata to less demanding users, including undergraduate and postgraduate students who want simpler access to official-statistics microdata.
Lecture for students of the Faculty of Arts, University of Maribor, and pupils of Prva gimnazija Maribor. The audience learned about ADP's mission, the data life cycle, the research data management plan, and the concepts of secondary analysis and metadata. We showed how to reach high-quality research data in national and international data catalogues via the ADP website. In the second part, they learned to use the Nesstar browser, which, among other things, supports searching, browsing, and analysing data directly online.
This document discusses the history and current state of the Social Science Data Archive (ADP) in Slovenia. It describes how ADP was established in 1997 to archive social science research data and has since grown to five employees. Currently, ADP's holdings include over 500 datasets, accessed by around 600 users annually. ADP works to promote open access to research data and provides support to data producers and users. It also collaborates with other related data services and participates in projects to further develop Slovenia's research data infrastructure.
This document provides a summary of key concepts and example problems to help students prepare for their undergraduate statistics final exam. It covers topics such as levels of measurement, types of sampling, descriptive statistics, populations and samples, qualitative vs. quantitative data, pivot tables, normal distributions, Poisson distributions, and confidence intervals. The examples are worked out step by step to demonstrate the calculations and show the reasoning behind each answer. The goal is to refresh students' memories of what they learned and help them feel more prepared for their upcoming final.
The document provides directions for a 9th grade computer applications activity where students will research a topic of their choice and create a graph about that topic in Microsoft Excel. Students will choose a topic, get it approved, research the topic to find multiple data points, and create a graph in Excel with a title, legend, and at least 5 data points. The activity aims to teach students graphing skills in Excel while allowing them to explore their interests. Potential drawbacks include students wasting time or having difficulty finding information, but the activity is intended to be creative and engaging for students.
This document provides an introduction to descriptive statistics. It discusses organizing and presenting both qualitative and quantitative data. For qualitative data, it describes frequency distribution tables, relative frequencies, percentages, and graphs like bar charts and pie charts. For quantitative data, it covers stem-and-leaf displays, frequency distributions, class widths and midpoints, relative frequencies and percentages. It also discusses histograms for presenting grouped quantitative data. Examples are provided to illustrate these concepts and techniques.
The document provides examples and exercises on statistics concepts like mean, median, mode, range, class intervals, frequency distributions, and pictographs. It contains 10 questions with multiple parts testing understanding of these concepts through calculations and interpreting data presented in tables and diagrams.
Lecture at the event "Odprta obzorja – odprti dostop v znanosti" (Open Horizons – Open Access in Science), ŠOU v Ljubljani.
In the introduction we presented ADP and where it fits in the support ecosystem for open access. We then highlighted the importance of open access for the University of Ljubljana, and in particular of open access to research data. On the demand side, we pointed out the potential of introducing work with accessible research data into study programmes; data can become an important element of advanced Open Educational Resources (OER). We also touched on the broader picture on the supply side: open access to data helps ensure research integrity and raise research quality, both of which can in turn contribute to raising the quality of education at the University.
The document provides information about measures of central tendency and dispersion in statistics. It discusses finding the mode, median, and mean of ungrouped and grouped data. It also discusses determining the range and interquartile range of ungrouped and grouped data. Formulas are provided for calculating the mean, median, mode, range, interquartile range, and variance of data sets. Examples are worked through to demonstrate calculating these statistical measures from raw data sets and frequency distribution tables.
Step-1 Tableau Introduction
Step-2 Connecting to Data
Step-3 Building basic views
Step-4 Data manipulations and Calculated fields
Step-5 Tableau Dashboards
Step-6 Advanced Data Options
Step-7 Advanced graph Options
CPSC 120
Spring 2014
Lab 5
Name _____________________________
Practice Objectives of this Lab:
1. Relational Operators
2. The if Statement
3. The if/else Statement
4. The if/else if Statement
5. Menu-Driven Programs
6. Nested if Statements
7. Logical Operators
Grading:
1. 5.1–5.4: 15 points each;
5.5–5.6: 20 points each;
100 points in total.
2. Your final complete solution report is due before your lab next week.
To begin
· Log on to your system and create a folder named Lab5 in your work space.
· Start the C++ IDE (Visual Studio) and create a project named Lab5.
Part I (60 points)
LAB 5.1 (15 points) – Using Boolean Variables and Branching Logic
Step 1: Add the tryIt5B.cpp program in your Lab5 folder to the project. Here is a copy of the int main() source code.
// Lab 5 tryIt5B
#include <iostream>
using namespace std;

int main()
{
    bool hungry = true,
         sleepy = false,
         happy  = true,
         lazy   = false;

    cout << hungry << " " << sleepy << endl;

    if (hungry == true)
        cout << "I'm hungry. \n";

    if (sleepy == true)
        cout << "I'm sleepy. \n";

    if (hungry)
        cout << "I'm still hungry. \n";
    else
        cout << "I'm not hungry. \n";

    if (sleepy)
        cout << "I'm still sleepy. \n";
    else
        cout << "I'm not sleepy. \n";

    if (sleepy)
        cout << "I'm sleepy. \n";
    else if (lazy)
        cout << "I'm lazy. \n";
    else if (happy)
        cout << "I'm happy. \n";
    else if (hungry)
        cout << "I'm hungry. \n";

    return 0;
}
Expected Output
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
Observed Output
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
Step 2: Read the source code, paying special attention to the expressions that control the branching statements. Then complete the “Expected Output” column above, writing down what output you think each cout statement will produce. If no output will be produced, leave the line blank.
Step 3: Now compile and run the tryIt5B.cpp program, and look at the output it creates. If the actual output from a cout statement matches what you wrote down, just place a checkmark in the “Observed Output” column. If it is not the same, write down the actual output.
LAB 5.2 (15 points) – Working with the if and if/else Statements
Step 1: Remove tryIt5B.cpp from the project and add the testNum.cpp program in your Lab5 folder to the project. Here is a copy of the source code.
// Lab 4 testNum.cpp
// This program checks to see if a test score is equal to 100.
// It currently contains a l.
Computer Assignment 7
Case on Central Limit Theorem
Study the case in the file: INSTRUCTIONS FOR COMPUTER ASSIGNMENT 7
Follow the steps in this case and do the assignment at the end of it. The assignment requires you to perform experiments using EXCEL and verify the Central Limit Theorem.
Assignment:
(1) Repeat steps (i) to (iv) to generate random numbers from a Uniform Distribution and create histograms for the sample sizes n=1, n=5, n=10, n=20, and n=30.
(2) Repeat steps (i) to (iv), but this time generate your random numbers using an Exponential Distribution. If an exponential distribution is not available, use another distribution such as the Binomial or Poisson distribution. Create histograms for n=1, n=5, n=10, n=20, and n=30.
(3) Repeat steps (i) to (iv), but this time generate your random numbers using a Normal Distribution. Create histograms for n=1, n=5, n=10, n=20, and n=30.
Present all the histograms you created in (1), (2), and (3). Identify the distribution used to generate the random numbers and the sample size for each histogram. Write a brief report including all the histograms, explain how this experiment helped you understand the Central Limit Theorem, and discuss your understanding of its importance in statistics and data analysis.
Note: do not print and display the random numbers; only show your graphs and indicate the distribution you used to create them.
This document provides an overview of key concepts in probability and statistics including:
1. Definitions of experimental units, variables, samples, populations, and types of data.
2. Methods for graphing univariate data distributions including bar charts, pie charts, histograms and more.
3. Techniques for interpreting graphs and describing data distributions based on their shape, proportion of measurements in intervals, and presence of outliers.
The document provides practices for coding PL/SQL that are worth considering. It discusses 11 practices:
1) Using UNION instead of mixing SELECT MIN and MAX to get faster performance.
2) COUNT(*), COUNT(1) or COUNT(PK) have same performance.
3) Whether to use NOT IN or MINUS depends on table sizes - MINUS is generally faster but NOT IN may be faster for larger tables.
4) Some hints like parallel are ignored or incompatible with others like index.
5) Nested loops can sometimes be improved by rewriting the query.
6) Full table scans with parallel hint can utilize multiple CPUs.
7) Rewriting NOT IN
Predavanje za študente Filozofske fakultete Univerze v Ljubljani. Seznanili so se s konceptom sekundarne analize in metapodatkov. Pokazali smmo, kako preko spletne strani ADP priti do kakovostnih raziskovalnih podatkov v domačih in mednarodnih podatkovnih katalogih. Obenem so se seznanili tudi z uporabo pregledovalnika Nesstar, ki med drugim omogoča iskanje, pregledovanje in analiziranje podatkov neposredno na spletu. V drugem delu so študentje na podlagi podanih navodil izvedli samostojno vajo, v kateri so se seznanili z opisom raziskave, citiranjem in osnovnimi statističnimi analizami.
Dodatna gradiva: http://www.adp.fdv.uni-lj.si/adp_izobrazevanje_avg2014/presentations/Vodic%20po%20orodjih%20ADP.pdf
V prvem delu se udeleženci na kratko seznanijo z zgodovino "osveščenega pristanka" ter s pomenom dokumenta za načrtovanje raziskovanja ter za nadaljnje ravnanje z raziskovalnimi podatki. V nadaljevanju se seznanijo s pojmom občutljivi podatki ter z vprašanji, ki pomenijo ustrezno in kakovostno ravnanje z občutljivimi podatki.
Objava po dogodku: http://www.adp.fdv.uni-lj.si/blog/2014/blog/seminar-za-raziskovalce-priprava-raziskovalnih-podatkov-za-odprti-dostop/#axzz43uNG9ipW
Video: http://videolectures.net/adpseminar2014_brvar_prakticnidel/
Predavanje na dogodku "Vloga knjižničarjev pri odpiranju raziskovalnih podatkov in vodenju bibliografij raziskovalcev" na Fakulteti za družbene vede v Ljubljani.
Več: http://www.adp.fdv.uni-lj.si/adp_delavnica_maj2014/
Video: http://videolectures.net/adpdelavnica2014_stebe_bezjak_politike_odprtega/
The presentation of different Slovenian Labour Force Survey microdata, accompanying metadata and materials, and modes of access. From the Fourth DwB Training Course in Ljubljana.
V prvem delu predstavitve so se udeleženci seznanili z zahtevami po odprtem dostopu, ki jih postavlja nov okvirni program za raziskovanje in inovacije Obzorje 2020. V nadaljevanju so bile predstavljene zahteve, ki jih je potrebno izpolniti za predajo podatkov v ADP. V zadnjem delu so se udeleženci seznanili z orodjem, ki je lahko v pomoč pri pripravi načrta za ravnanje z raziskovalnimi podatki (data management planning).
Video: http://videolectures.net/adpseminar2014_bezjak_brvar_prakticnidel/
Pričujoč dokument je nastal za potrebe izvedbe delavnice "Uporaba mikropodatkov Ankete o delovni sili v študijske namene", ki je nastala v povezovanju Arhiva družboslovnih podatkov in Statističnega urada Republike Slovenije.
Več na: http://www.adp.fdv.uni-lj.si/adp_delavnica_feb2014/index.html
Predstavitev je bila namenjena raziskovalcem, ustvarjalcem raziskovalnih podatkov, ki bodo v prihodnje za svoje podatke morali ali želeli zagotoviti odprti dostop. Seznanili so se s postopkom oddaje gradiv v Arhiv družboslovnih podatkov, z obrazci in orodji, ki so na voljo za podporo pri oddaji gradiv ter s prednostmi, ki izvirajo iz dejstva, da so podatki spravljeni v ADP. Dogodek je potekal v knjižnici ZRS UP v Kopru.
Prezentacija je bila narejena za predavanje pri predmetu Trg delovne sile in zaposlovanje na Fakulteti za družbene vede. V sodelovanju med Arhivom družboslovnih podatkov in Statističnim uradom Republike Slovenije je bila pripravljena anonimizirana mikropodatkovna datoteka Ankete o delovni sili 2010, in sicer z namenom širše distribucije mikropodatkov manj zahtevnim uporabnikom, med katere prištevamo tudi dodiplomske in podiplomske študente, ki si želijo enostavnejšega dostopa do mikropodatkov uradne statistike.
Predavanje za študente Filozofske fakultete UM in dijake Prve gimnazije Maribor. Na predavanju so se poslušalci seznanili s poslanstvom ADP, življenjskim krogom podatkov in Načrtom ravnanja z raziskovalnimi podatki ter konceptom sekundarne analize in metapodatkov. Pokazali smo, kako preko spletne strani ADP priti do kakovostnih raziskovalnih podatkov v domačih in mednarodnih podatkovnih katalogih. V drugem delu pa so se seznanili z uporabo pregledovalnika Nesstar, ki med drugim omogoča iskanje, pregledovanje in analiziranje podatkov neposredno na spletu.
This document discusses the history and current state of the Social Science Data Archive (ADP) in Slovenia. It describes how ADP was established in 1997 to archive social science research data and has since expanded to include 5 employees. Currently, ADP's holding includes over 500 datasets that are accessed by around 600 users annually. ADP works to promote open access to research data and provides support to data producers and users. It also collaborates with other related data services and participates in projects to further develop Slovenia's research data infrastructure.
This document provides a summary of key concepts and example problems to help students prepare for their undergraduate statistics final exam. It covers topics like levels of measurement, types of sampling, descriptive statistics, populations and samples, qualitative vs. quantitative data, pivot tables, normal distributions, Poisson distributions, and confidence intervals. The examples are worked out step-by-step to demonstrate the calculations and show the reasoning behind each answer. The goal is to help refresh students' memories on what they learned and to feel more prepared for their upcoming final.
The document provides directions for a 9th grade computer applications activity where students will research a topic of their choice and create a graph about that topic in Microsoft Excel. Students will choose a topic, get it approved, research the topic to find multiple data points, and create a graph in Excel with a title, legend, and at least 5 data points. The activity aims to teach students graphing skills in Excel while allowing them to explore their interests. Potential drawbacks include students wasting time or having difficulty finding information, but the activity is intended to be creative and engaging for students.
This document provides an introduction to descriptive statistics. It discusses organizing and presenting both qualitative and quantitative data. For qualitative data, it describes frequency distribution tables, relative frequencies, percentages, and graphs like bar charts and pie charts. For quantitative data, it covers stem-and-leaf displays, frequency distributions, class widths and midpoints, relative frequencies and percentages. It also discusses histograms for presenting grouped quantitative data. Examples are provided to illustrate these concepts and techniques.
The document provides examples and exercises on statistics concepts like mean, median, mode, range, class intervals, frequency distributions, and pictographs. It contains 10 questions with multiple parts testing understanding of these concepts through calculations and interpreting data presented in tables and diagrams.
redavanje na dogodku "Odprta obzorja – odprti dostop v znanosti", ŠOU v Ljubljani.
V uvodnem delu smo predstavili ADP, ter kam se ta umešča v podpornem ekosistemu za odprti dostop. V nadaljevanju smo izpostavili pomen odprtega dostopa za Univerzo v Ljubljani in še posebej pomen odprtega dostopa do raziskovalnih podatkov. Z vidika povpraševanja smo izpostavili potencial, ki ga lahko predstavlja uvajanje dela na dostopnih raziskovalnih podatkih v okviru študija. Podatki lahko postanejo pomemben element v naprednih Odprtih izobraževalnih gradivih (OER). Omenili smo tudi širšo sliko pomena odprtega dostopa do podatkov na strani ponudbe: vloga pri zagotavljanju integritete raziskovanja in dviga kakovosti raziskovanja, ki oboje lahko s svojim povratnim vplivom prispeva tudi k dvigu kakovosti izobraževanja na Univerzi.
The document provides information about measures of central tendency and dispersion in statistics. It discusses finding the mode, median, and mean of ungrouped and grouped data. It also discusses determining the range and interquartile range of ungrouped and grouped data. Formulas are provided for calculating the mean, median, mode, range, interquartile range, and variance of data sets. Examples are worked through to demonstrate calculating these statistical measures from raw data sets and frequency distribution tables.
Step-1 Tableau Introduction
Step-2 Connecting to Data
Step-3 Building basic views
Step-4 Data manipulations and Calculated fields
Step-5 Tableau Dashboards
Step-6 Advanced Data Options
Step-7 Advanced graph Options
CPSC 120
Spring 2014
Lab 5
Name _____________________________
Practice Objectives of this Lab:
1. Relational Operators
2. The if Statement
3. The if/else Statement
4. The if/else if Statement
5. Menu-Driven Programs
6. Nested if Statements
7. Logical Operators
Grading:
1. 5.1-5.4 15 points each,
5.5-5.6 20 points each
100 points totally
2. Your final complete solution report is due before your lab next week.
To begin
· Log on to your system and create a folder named Lab5 in your work space.
· Start the C++ IDE (Visual Studio) and create a project named Lab5.
Part I (60 points)
LAB 5.1 (15 points) – Using Boolean Variables and Branching Logic
Step 1: Add the tryIt5B.cpp program in your Lab5 folder to the project. Here is a copy of the int main() source code.
// Lab 5 tryIt5B
// Header lines are elided in the handout; the two lines below are added so the
// program compiles as shown.
#include <iostream>
using namespace std;

int main()
{
    bool hungry = true,
         sleepy = false,
         happy = true,
         lazy = false;

    cout << hungry << " " << sleepy << endl;

    if (hungry == true)
        cout << "I'm hungry. \n";

    if (sleepy == true)
        cout << "I'm sleepy. \n";

    if (hungry)
        cout << "I'm still hungry. \n";
    else
        cout << "I'm not hungry. \n";

    if (sleepy)
        cout << "I'm still sleepy. \n";
    else
        cout << "I'm not sleepy. \n";

    if (sleepy)
        cout << "I'm sleepy. \n";
    else if (lazy)
        cout << "I'm lazy. \n";
    else if (happy)
        cout << "I'm happy. \n";
    else if (hungry)
        cout << "I'm hungry. \n";

    return 0;
}
Expected Output
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
Observed Output
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
_________________
Step 2: Read the source code, paying special attention to the expressions that control the branching statements. Then complete the “Expected Output” column above, writing down what output you think each cout statement will produce. If no output will be produced, leave the line blank.
Step 3: Now compile and run the tryIt5B.cpp program, and look at the output it creates. If the actual output from a cout statement matches what you wrote down, just place a checkmark in the “Observed Output” column. If it is not the same, write down the actual output.
LAB 5.2 (15 points) – Working with the if and if/else Statements
Step 1: Remove tryIt5B.cpp from the project and add the testNum.cpp program in your Lab5 folder to the project. Here is a copy of the source code.
// Lab 4 testNum.cpp
// This program checks to see if a test score is equal to 100.
// It currently contains a l.
Computer Assignment 7
Case on Central Limit Theorem
Study the case in the file: INSTRUCTIONS FOR COMPUTER ASSIGNMENT 7
Follow the steps in this case and do the assignment at the end of it. The
assignment requires you to perform experiments using Excel and verify the
Central Limit Theorem.
Assignment:
(1) Repeat the steps (i) to (iv) to generate random numbers from a Uniform Distribution
and create histograms for the sample sizes n=1, n=5, n=10, n=20, and n=30.
(2) Repeat the steps (i) to (iv), but this time generate your random numbers using an
exponential distribution. If the exponential distribution is not available, use another
distribution such as the binomial or Poisson. Create histograms for n=1,
n=5, n=10, n=20, and n=30.
(3) Repeat the steps (i) to (iv) above, but this time generate your random numbers using
a normal distribution. Create histograms for n=1, n=5, n=10, n=20, and n=30.
Present all histograms you created in (1), (2), and (3). Identify the distributions you used
to generate your random numbers and the sample size for each histogram created.
Write a brief report including all the histograms you created. Explain how this
experiment helped you understand the Central Limit Theorem. Discuss your
understanding and the importance of the central limit theorem in statistics and data
analysis.
Note: do not print and display the random numbers, only show your graphs and
indicate the distribution you used to create these graphs.
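The assignment itself is to be done in Excel, but the same experiment can be sketched in a few lines of Python as a stand-alone illustration (the function and variable names below are our own, not part of the assignment). Drawing repeated samples of size n and summarizing the sample means shows them tightening around the population mean as n grows:

```python
import random
import statistics

def sample_means(dist_draw, n, trials=2000, seed=42):
    """Draw `trials` samples of size n and return the mean of each sample."""
    rng = random.Random(seed)
    return [statistics.mean(dist_draw(rng) for _ in range(n)) for _ in range(trials)]

# Uniform(0, 1): population mean 0.5, population sd sqrt(1/12) ~ 0.289.
uniform_draw = lambda rng: rng.random()

for n in (1, 5, 10, 20, 30):
    means = sample_means(uniform_draw, n)
    # As n grows, the sample means stay centred on 0.5 while their
    # spread shrinks roughly like 0.289 / sqrt(n) -- the CLT at work.
    print(n, round(statistics.mean(means), 3), round(statistics.stdev(means), 3))
```

Replacing `uniform_draw` with an exponential or normal generator (as in parts (2) and (3)) changes the shape of the raw data, but the distribution of the sample means still approaches a normal distribution centred on the population mean.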
This document provides an overview of key concepts in probability and statistics including:
1. Definitions of experimental units, variables, samples, populations, and types of data.
2. Methods for graphing univariate data distributions including bar charts, pie charts, histograms and more.
3. Techniques for interpreting graphs and describing data distributions based on their shape, proportion of measurements in intervals, and presence of outliers.
The document provides practices for coding PL/SQL that are worth considering. It discusses 11 practices:
1) Using UNION instead of mixing SELECT MIN and MAX to get faster performance.
2) COUNT(*), COUNT(1) or COUNT(PK) have same performance.
3) Whether to use NOT IN or MINUS depends on table sizes - MINUS is generally faster but NOT IN may be faster for larger tables.
4) Some hints like parallel are ignored or incompatible with others like index.
5) Nested loops can sometimes be improved by rewriting the query.
6) Full table scans with parallel hint can utilize multiple CPUs.
7) Rewriting NOT IN
This document provides instructions for calculating key Project Management metrics like the Program Evaluation and Review Technique (PERT) using both manual calculations and Microsoft Excel. It outlines 5 steps to perform PERT calculations manually: 1) define tasks, 2) organize tasks in logical order, 3) generate estimates, 4) determine earliest and latest dates, 5) determine probability of meeting dates. It then demonstrates how to set up a spreadsheet to automate the calculations and determine completion probabilities for different dates. Key lessons include that all plans are estimates and scope changes require updated estimates.
This document discusses using the CALL SYMPUT routine to transfer information between DATA step program steps. It provides three examples: 1) creating dummy variables for all possible values of a variable, 2) generating labels for variables using existing formats, and 3) using the BYTE function to assign alphabetically ordered names to datasets created from raw data files. CALL SYMPUT assigns values produced in a DATA step to macro variables, allowing dynamic communication between SAS language and macros.
Random Forest and Generalized Boosted Model classification models were used to predict if participants correctly or incorrectly performed a bicep curl exercise based on accelerometer data from wearable devices. Random Forest achieved 98.49% average accuracy on the training data and 100% accuracy on the test data. Generalized Boosted Model achieved 92.59% average accuracy on the training data. Both models produced promising results for classifying the exercise performances.
This document summarizes the analysis of data from a pharmaceutical company to model and predict the output variable (titer) from input variables in a biochemical drug production process. Several statistical models were evaluated including linear regression, random forest, and MARS. The analysis involved developing blackbox models using only controlled input variables, snapshot models using all input variables at each time point, and history models incorporating changes in input variables over time to predict titer values. Model performance was compared using cross-validation.
Experiments
A Quick History of Design of Experiments
Why We Use Experimental Designs
What is Design of Experiment
How Design of Experiment contributes
Terminology
Analysis Of Variation (ANOVA)
Basic Principle of Design of Experiments
Some Experimental Designs
This document provides an overview and instructions for using the InnerSoft STATS software to analyze data. It describes 15 different analysis procedures that can be accessed from the software's Analyze Menu, including frequency tables, descriptive statistics, crosstabs, hypothesis tests, ANOVA, correlation, regression, and time series analysis. For each analysis procedure, it provides a brief overview and descriptions of the input options and statistical tests that can be selected. The document is intended to help users understand what types of analyses can be performed and how to set up and interpret the results.
This document provides a tutorial on conducting and interpreting a multiple linear regression analysis in SPSS. It contains two sections - the first outlines the steps to specify a regression analysis in SPSS using sample data. The second section interprets example SPSS output, including descriptive statistics, bivariate correlations, model summary, ANOVA table, and coefficients output. It also provides a guide for writing up the results in APA style.
This document provides instructions for 6 exercises in an introduction to arrays homework assignment. The exercises involve: 1) analyzing input numbers stored in an array, 2) converting dates to English from a formatted string using arrays of months and number of days, 3) finding the average temperature and months above average using parallel arrays, 4) implementing the Sieve of Eratosthenes algorithm to find prime numbers using an array, 5) grading multiple choice exams by comparing answer arrays, and 6) displaying student grades by reading data from a file into parallel arrays.
This homework assignment involves completing conceptual questions about statistics, sampling, and probability. It also involves analyzing real data sets in SPSS and interpreting the results. Students are asked to enter data, run analyses including frequencies, descriptive statistics, and graphs. They must interpret the central tendency, dispersion, distribution and outliers of the data. The assignment assesses students' understanding of key statistical concepts and their ability to apply statistical procedures in SPSS and draw conclusions from the results.
This study introduces and compares different methods for estimating the two parameters of generalized logarithmic series distribution. These methods are the cuckoo search optimization, maximum likelihood estimation, and method of moments algorithms. All the required derivations and basic steps of each algorithm are explained. The applications for these algorithms are implemented through simulations using different sample sizes (n = 15, 25, 50, 100). Results are compared using the statistical measure mean square error.
This document compares different estimation methods for parameters of the generalized logarithmic series distribution (GLSD), including cuckoo search optimization (CSO), maximum likelihood estimation (MLE), and method of moments (MOM). The CSO algorithm is introduced and applied to estimate the two GLSD parameters. Simulation results using different sample sizes show that CSO performs best for small sample sizes while MLE is best for large sample sizes, based on mean square error. The document concludes that CSO is the best estimator for small sample sizes.
Homework 1
Introduction to Statistics
Be sure you have reviewed this module/week’s lesson and presentations before proceeding to the homework exercises. Number all responses. Review the “Homework Instructions: General” document for an example of how homework assignments must look.
Homework 1 does not include any SPSS output and consists only of Part I.
In this presentation we talked about preparing documentation and adapting work processes for acquiring the DSA. We briefly introduced ADP as the national data repository for the social sciences and outlined the history of its policy development: the changes at the national and international level, and the internal and external developments that posed new challenges. Finally, we explained the process of preparing and developing the policy.
Related link: http://www.dans.knaw.nl/nl/actueel/agenda/cessda-expert-seminar-2015
We discussed the materials that need to be saved, how to save them, and what tools we can use in the process.
Event was one of Foster Cessda training events for doctoral students.
Related link: https://www.fosteropenscience.eu/event/cessda-research-data-management-open-data-doctoral-training-series-research-data-management
https://www.fosteropenscience.eu/project/index.php?option=com_content&view=category&layout=blog&id=23&Itemid=104
The presentations focused on the materials and documentation that should be saved in order to prepare a data file from a survey for secondary use. Some hints were given on how to label items, code missing values, organize the folder structure, etc. In addition to a clean dataset, data-level documentation following the internationally accepted DDI specification can be prepared using Colectica for Excel or Nesstar Publisher.
Event was one of Foster Cessda training events for doctoral students.
Related link: https://www.fosteropenscience.eu/project/index.php?option=com_content&view=category&layout=blog&id=23&Itemid=104
Abstract: https://www.fosteropenscience.eu/event/research-data-management-and-open-data-0
Dilemmas related to sharing research data were presented. We talked about fraud and misuse, and about examples of journal articles retracted because of proven fraud. Licences for research data were introduced, along with journals' requirements concerning open access policies. Researchers need to check and verify the journal in which they intend to publish; they should use DOAJ for that. Unfortunately, there are more and more hijacked journals. When making data available for secondary use, researchers should confirm that distribution complies with ethical norms and the legal system.
Event was one of Foster Cessda training events for doctoral students.
Related link: https://www.fosteropenscience.eu/project/index.php?option=com_content&view=category&layout=blog&id=23&Itemid=104
Lecture at the event "Informacijska družba 2015, Soočanje z demografskimi izzivi" (Information Society 2015: Facing Demographic Challenges) at the Jožef Stefan Institute.
Census microdata are an exceptionally high-quality source for secondary use, both for research and for educational purposes. Learning with census microdata has many advantages: improving statistical and methodological literacy, connecting theory and practice, acquiring analytical skills, and broadening knowledge of demographic problems that are geographically or temporally distant. In other countries, institutions that distribute census microdata have well-developed practices for supporting the inclusion of census data in education. We have therefore extended our own national-level activities promoting the use of census microdata for educational purposes.
Post-event publication: http://is.ijs.si/zborniki/!%20B%20-%20Soocanje%20z%20demografskimi%20izzivi%20-%20ZBORNIK.pdf
Topics covered at the workshop address basic questions related to Research Data Management for open data, which include preparing a Research Data Management (RDM) plan, licensing data and intellectual property, metadata and contextual description (documentation), ethical and legal aspects of sharing sensitive or confidential data, anonymizing research data for reuse, data archiving and long-term preservation, and data security and storage.
Event: http://conferences.nib.si/AS2015/default.htm
Related material: http://conferences.nib.si/AS2015/BookAS15.pdf
This document summarizes a course for doctoral students on research data management and open data. It discusses:
- The complexity and diversity of research methodologies and data types.
- An open data project in Slovenia that aimed to establish national policies through stakeholder interviews and workshops.
- The research and data lifecycles, highlighting key roles and responsibilities at different stages for researchers, institutions, libraries, and funders.
- The role of data services in managing data through the lifecycle, from depositors to curation to access for users.
This presentation contains important aspects related to methodology and procedures of saving data, including data documentation, data and metadata standards and tools to be used for depositing.
Event was one of Foster Cessda trainings for doctoral students.
Videos: http://videolectures.net/adptecaj2015_ljubljana/
In the first part, different purposes of depositing data were discussed. Later on, the following questions were raised: Where to deposit research data? Why choose ADP for deposition? How to deposit research data? The final point for discussion was the research data acquisition stage.
Event was one of Foster Cessda trainings for doctoral students.
Videos: http://videolectures.net/adptecaj2015_ljubljana/
Related links: https://www.fosteropenscience.eu/event/cessda-research-data-management-open-data-doctoral-training-series-research-data-management
https://www.fosteropenscience.eu/project/index.php?option=com_content&view=category&layout=blog&id=23&Itemid=104
Popular portals for social scientists were presented, such as CESSDA, the European Social Survey, the European Election Database, and the Atlas of European Values. Special attention was paid to official statistics microdata, the DwB project and its training courses. Finally, the metadata systems CIMES and MISSY were presented: access to EU official statistics microdata and aggregate data, access to census microdata, and access to official statistics microdata in Slovenia via the SI-STAT data portal.
Event was one of Foster Cessda trainings for doctoral students.
Videos: http://videolectures.net/adptecaj2015_ljubljana/
Related materials/pages: https://www.fosteropenscience.eu/project/index.php?option=com_content&view=category&layout=blog&id=23&Itemid=104
https://www.fosteropenscience.eu/event/cessda-research-data-management-open-data-doctoral-training-series-research-data-management
Seminar participants learned how to handle research data in the planning and data creation phase.
Link: http://www.adp.fdv.uni-lj.si/blog/2015/blog/prakticni-vidiki-objavljanja-v-odprtem-dostopu/#axzz43uNG9ipW
Lecture at an event "SEEDS Kick-off meeting", FORS, Lausanne, Switzerland.
Related page: http://www.snf.ch/en/funding/programmes/scopes/Pages/default.aspx
http://seedsproject.ch/?p=1
Lecture at an event "SEEDS Kick-off meeting", FORS, Lausanne, Switzerland.
Related materials: http://www.snf.ch/en/funding/programmes/scopes/Pages/default.aspx
http://seedsproject.ch/?page_id=368
The document discusses training activities provided as part of the SERSCIDA project. It describes a mixture of external training events attended and internal tailor-made training content developed. The internal training included establishing needs, drafting a roadmap for data service establishment, a course on setting up data services from the UK Data Archive, and a training manual. It also lists face-to-face meetings to provide overviews of data service structures. The document ends with discussing designing further training activities based on feedback and assessing needs of new partners.
Lecture at the event "Delavnica Trajno ohranjanje digitalnih vsebin" (Workshop on Long-Term Preservation of Digital Content). Participants were introduced to the concept of research data, the classification of data, data types, and the different forms of access to data. The mission of ADP and its role within the CESSDA consortium were presented. The greater part of the lecture was devoted to the data lifecycle and to how data should be handled so that they can be preserved at high quality over the long term.
Lecture at the event "Učinkovito vodenje mednarodnih raziskovalnih projektov s poudarkom na obzorju 2020" (Effective Management of International Research Projects with an Emphasis on Horizon 2020).
Participants were first introduced to the Social Science Data Archives (ADP), and then to the advantages of open access and the requirements of Horizon 2020, which foresees planned research data management for participants in the pilot project. The final part presented the expectations and requirements of ADP when data are deposited by a depositor, together with examples of good practice from abroad.
Related material: file:///C:/Users/adpstud2/Downloads/MD15Program_14.4.2015.pdf
Lecture at the event "Odprti dostop na Univerzi v Mariboru" (Open Access at the University of Maribor), UKM - Univerzitetna knjižnica Maribor, CTK - Centralna tehniška knjižnica, Maribor.
Participants learned about the opening up of research data in the Slovenian environment and about the requirements arising from the Horizon 2020 programme. The second part of the presentation covered the data services and the support for the research community provided by the data repository.
Video: http://lsvc1.arnes.si/videos/video/393/in/channel/24/
Lecture at an event "Fourth DwB Training Course: Working with data from official statistics in Europe –particularly the European Union Labour Force Survey (EU-LFS)", Data without Boundaries (DwB), Ljubljana, Slovenia.
Related materials: http://www.dwbproject.org/events/tc4.html
The lecture presents the lifecycle of research based on the method of secondary analysis. It is illustrated with examples of studies that fall wholly or partly within the field of mass media. Students learn about the general questions: how to formulate a research problem, where and how to find high-quality and suitable research data, and how to compose a research report.
Related materials: http://www.zrss.si/dokumenti/zajavnost/Izobrazevanje-srednjesolskih-uciteljev-21-22avgust-2014_ver3.pdf
http://www.adp.fdv.uni-lj.si/blog/2014/blog/druzboslovni-podatkovni-viri-za-delo-s-srednjesolci/#axzz43uNG9ipW
Lecture at the event "Dostop in uporaba razpoložljivih družboslovnih podatkov v srednjih šolah" (Access to and Use of Available Social Science Data in Secondary Schools), Zavod RS za šolstvo, Ljubljana.
Participants learned about the options for accessing social science data sources in domestic and foreign repositories. They became acquainted in more detail with the Nesstar tool and with the catalogue of studies accessible through the Social Science Data Archives.
Related materials: http://www.zrss.si/dokumenti/zajavnost/Izobrazevanje-srednjesolskih-uciteljev-21-22avgust-2014_ver3.pdf
http://www.adp.fdv.uni-lj.si/blog/2014/blog/druzboslovni-podatkovni-viri-za-delo-s-srednjesolci/#axzz43uNG9ipW
Introductory Exercise: Establish extent of precarious employment in EU countries and explore potential for comparative analysis
Leibniz Institute for the Social Sciences
Janez Štebe: Introductory Exercise: Establish extent of precarious employment in EU countries and explore potential for comparative analysis
Training Course on EU-LFS, September 17th-20th 2014, Ljubljana
Content:
I. Explore the data set
II. Prepare the working data set
III. Precarious employment in different countries – separate analysis by countries
IV. Including the macro level variable into explanation – joint analysis
Before you start
Select only the working-age population (15-74) and respondents living in private households for the analysis.
I. Explore the data set
Start the analysis by checking the structure of the data file. Does it contain the expected variables? Do they
contain the definitions of the missing values? What is the order of the variables in the data set? What are the
units of analysis?
(Display, frequencies, codebook routines in SPSS)
II. Prepare the working data set
a) Select the relevant population you need to work with. In order to analyse the forms of precarious
employment, we will limit the analysis to the currently employed population.
Solution (Tip: Use Format Painter to make the solution visible):
freq WSTATOR .
* You can either type the command into syntax or use the menu Data-->Select... and then paste and execute it.
DATASET COPY working_pop.
DATASET ACTIVATE working_pop.
FILTER OFF.
USE ALL.
select if (WSTATOR = 1 or WSTATOR=2) .
EXECUTE.
*Check the result.
freq WSTATOR .
freq country.
End of Solution
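The SELECT IF step above keeps only respondents whose labour status WSTATOR is 1 or 2. For readers who do not use SPSS, the same filtering can be sketched in plain Python (the toy records below are hypothetical, invented for illustration):

```python
# Hypothetical toy records with the WSTATOR labour-status code
# (1 and 2 = currently employed, per the selection used in the syntax above).
rows = [
    {"id": 1, "WSTATOR": 1},
    {"id": 2, "WSTATOR": 3},
    {"id": 3, "WSTATOR": 2},
    {"id": 4, "WSTATOR": 9},
]

# The SELECT IF (WSTATOR = 1 or WSTATOR = 2) step as a plain filter.
working_pop = [r for r in rows if r["WSTATOR"] in (1, 2)]
print([r["id"] for r in working_pop])
```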
b) While comparing countries you may wish to obtain an equal sample size of the selected population
across countries.
Question: Does the data set contain weights of some kind? Which shall we use?
Solution:
See the explanation in EUROSTAT (2013): Quality report of the European Union Labour Force Survey 2012 - 2014 edition
http://epp.eurostat.ec.europa.eu/portal/page/portal/product_details/publication?p_product_code=KS-TC-14-001
Weights usually express the inverse probability of selection. You can multiply different (independent)
types of weights if they exist. Since no weight variable is present in the training data set, we will create a
constant one to begin with.
End of Solution
Task
Prepare the weighting variable and activate it to obtain an equal sample size in each country.
Solution:
compute COEFF=1.
* make the weight active.
WEIGHT by coeff.
*Check the result. With the COEFF=1 nothing should happen.
freq country.
* obtain the values for the new weight coeff that will adjust sample size.
* For safety reasons, some commands require the file to be sorted on key variables, therefore we will sort by country.
SORT CASES BY COUNTRY.
AGGREGATE
/OUTFILE=* MODE=ADDVARIABLES
/PRESORTED
/BREAK=COUNTRY
/N_BREAK=N.
AGGREGATE
/OUTFILE=* MODE=ADDVARIABLES
/PRESORTED
/N_tot=N.
freq N_tot.
cross N_BREAK by country.
* Produce weight coeff in order to have an equal sample size: N_tot / number of countries.
compute coeff= coeff*((N_tot/21)/N_BREAK).
* Check the result: all sample sizes have to be equal.
freq country.
End of Solution
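The AGGREGATE and COMPUTE steps boil down to simple arithmetic: each case receives the weight (N_tot / number of countries) / N_country, so every country contributes the same weighted sample size. A minimal Python sketch of that arithmetic, on hypothetical toy data (function and variable names are our own):

```python
from collections import Counter

def equal_size_weights(countries):
    """For each case, return the weight (N_tot / n_countries) / N_country,
    so that the weighted count is identical in every country -- the same
    arithmetic as the AGGREGATE + COMPUTE coeff steps in the solution."""
    n_tot = len(countries)
    counts = Counter(countries)
    target = n_tot / len(counts)           # equal share per country
    return [target / counts[c] for c in countries]

# Hypothetical toy data: three countries with unequal sample sizes.
data = ["SI"] * 6 + ["AT"] * 3 + ["HU"] * 1
w = equal_size_weights(data)

# Weighted sample size per country is now equal (10 / 3 each).
weighted = Counter()
for c, wi in zip(data, w):
    weighted[c] += wi
print(dict(weighted))
```

Note that the weights sum to the original total sample size, so only the distribution across countries changes, not the overall N.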
III. Precarious employment in different countries
a) Identify the variables that could be used for the analysis of precarious employment.
Candidates are STARTIME, temp and ftpt.
b) Prepare some variables for further descriptive analysis
Task
Start with some basic descriptive analysis. Compare countries to establish differences. Also include some
bivariate analysis comparing the subpopulations in countries, in order to see whether correlations among variables
show any differences depending on the institutional context.
In our example we choose STARTIME as the dependent variable and temp, ftpt, sex and age as
independent variables. At the end of the exercise we will pursue the linear regression analysis. For that purpose we will
prepare and create dummy variables in advance where applicable.
Solution:
freq STARTIME temp ftpt .
* create dummy variables.
RECODE TEMP (2=1) (else=0) INTO temp_lm.
recode ftpt (1=1) (2=0) (MISSING=SYSMIS) INTO FT.
VARIABLE LABELS temp_lm 'Permanent_dummy'.
VARIABLE LABELS FT 'Full_time_dummy'.
EXECUTE.
*check.
cross temp by temp_lm.
cross ftpt by ft.
* prepare age for descriptive analysis.
RECODE age (17 22=20) (27 =27) (32 37=35) (42 THRU 52= 47) (57 thru 72=65) INTO age5.
var lab age5 'Lifecycle - 5 groups seniority levels (recode age)'.
val lab age5 20 'up to 22 years old' 27 'up to 29 years old' 35 'up to 40 years old' 47 'up to 54 years old' 65 'up to 72 years old' .
format age5 (f2.0).
freq age5.
* create dummy variables.
recode sex (1=1) (else=0) into sex_male .
cross sex by sex_male.
freq sex_male temp ftpt ft .
End of Solution
Ideas for thinking: Why did we handle missing values differently while recoding 'TEMP'?
5. c) Use a limited set of countries for country level oriented exploratory analysis.
Task
Select the countries that have representatives among the workshop participants. It is more practical to do the
exploratory analysis on a limited set of countries. Present some descriptive statistics at the country level, and
describe the independent variables by country.
Note that we will save the current data set with all the countries for further analysis at the end of session.
Solution:
DATASET COPY country_sel.
DATASET ACTIVATE country_sel.
FILTER OFF.
USE ALL.
SELECT IF (COUNTRY= 7 | COUNTRY=15 | COUNTRY=16 | COUNTRY=18 | COUNTRY=23 | COUNTRY=25 |
COUNTRY=27 | COUNTRY=29 | COUNTRY=31).
EXECUTE.
* check selection.
freq country.
* Descriptive statistics of the dependent variable and the dummy variables by country.
means STARTIME temp_lm ft sex_male by country
/CELLS MEAN MEDIAN COUNT STDDEV
/STATISTICS ANOVA .
* Mean STARTIME by country, layered by the individual-level variables.
MEANS TABLES=STARTIME by COUNTRY BY age5 sex temp ftpt
/CELLS MEAN MEDIAN COUNT STDDEV.
End of Solution
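The SELECT IF plus MEANS ... BY country pattern above is a filter followed by group means. A minimal Python sketch of the same idea on made-up records; the country codes and STARTIME values are toy data, not LFS values:

```python
from collections import defaultdict

# Toy records: (country_code, startime in months); codes mimic the SELECT IF list.
data = [(7, 3), (7, 9), (15, 12), (15, 24), (8, 5), (23, 24), (40, 1)]
selected = {7, 15, 16, 18, 23, 25, 27, 29, 31}

# SELECT IF equivalent: keep only the chosen countries.
subset = [(c, s) for c, s in data if c in selected]

# MEANS ... BY country equivalent: group means of the dependent variable.
groups = defaultdict(list)
for c, s in subset:
    groups[c].append(s)
means = {c: sum(v) / len(v) for c, v in groups.items()}
print(means)  # {7: 6.0, 15: 18.0, 23: 24.0}
```

Note that countries 8 and 40 drop out in the filter step, just as non-selected countries drop out of the SPSS working file.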
6. d) Display separate analysis by country
Task
Split file into portions by country and perform some further exploratory analysis. Conclude with the linear
regression analysis of STARTIME including the set of individual level independent variables.
Solution:
SPLIT FILE LAYERED BY COUNTRY.
MEANS TABLES=STARTIME by age5 sex temp ftpt
/CELLS MEAN MEDIAN COUNT STDDEV
/STATISTICS ANOVA .
corr STARTIME with temp_lm ft sex_male .
regression VARIABLES = STARTIME sex_male age temp_lm ft
/DEPENDENT STARTIME
/METHOD enter sex_male age temp_lm ft .
* Turn the split off so that subsequent analyses run on the whole file again.
SPLIT FILE OFF.
End of Solution
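Conceptually, SPLIT FILE ... BY COUNTRY fits the same model separately within each group. A minimal Python sketch of the idea, using a closed-form one-predictor OLS fit on made-up data; the values are toy numbers, not LFS results:

```python
from collections import defaultdict

def ols_slope_intercept(xs, ys):
    """Simple one-predictor OLS: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Toy records: (country, age, startime); values are made up for illustration.
data = [("SI", 20, 2), ("SI", 40, 10), ("SI", 60, 18),
        ("AT", 20, 5), ("AT", 40, 6), ("AT", 60, 7)]

# SPLIT FILE equivalent: group the cases, then fit the model per country.
by_country = defaultdict(list)
for country, age, startime in data:
    by_country[country].append((age, startime))

results = {}
for country, rows in by_country.items():
    xs, ys = zip(*rows)
    slope, intercept = ols_slope_intercept(list(xs), list(ys))
    results[country] = (round(slope, 3), round(intercept, 3))
    print(country, results[country])
```

Comparing the per-country slopes is the pen-and-paper version of reading the layered regression output.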
Ideas for thinking: How would you test if one country is statistically different from another?
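One simple answer to the question above is a two-sample test on the country means. A minimal Python sketch of Welch's t statistic on made-up samples; for large samples, |t| above roughly 1.96 suggests a difference at the 5 % level. This is an illustration only, not part of the SPSS exercise:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (no p-value here)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Toy STARTIME samples for two countries (values are made up).
si = [2, 4, 6, 8, 10]
at = [12, 14, 16, 18, 20]
print(round(welch_t(si, at), 2))  # -5.0
```

In SPSS the same comparison could be run with T-TEST GROUPS, or by inspecting the ANOVA statistics already requested in the MEANS commands.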
7. IV. Including the macro level variable into explanation
a) Aggregate the information from individual level data into country level table
Task
Compute the country-level unemployment rate from the individual-level data and store it in a separate table.
Then add this contextual variable to the individual-level file.
Solution:
DATASET ACTIVATE DataSet1.
freq ILOSTAT .
freq country.
* Create dummy unemploy.
Recode ILOSTAT (2=1) (1=0) (else=SYSMIS) into unemploy.
* check.
cross ilostat by unemploy / missing = include.
* display.
means unemploy by country
/CELLS MEAN COUNT
/STATISTICS ANOVA.
* put unemploy rates to table.
DATASET DECLARE unemploy_mean.
* If the file is not already sorted, sorting by the break variable is required.
SORT CASES BY COUNTRY.
AGGREGATE
/OUTFILE=unemploy_mean
/PRESORTED
/BREAK=COUNTRY
/unemploy_mean=MEAN(unemploy).
DATASET ACTIVATE unemploy_mean .
*see result.
list .
End of Solution
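The AGGREGATE step above collapses individual cases into one mean per break group. A minimal Python sketch of the same idea on made-up cases; the country codes and the 0/1 unemployment dummy are toy values:

```python
from collections import defaultdict

# Toy cases after the dummy recode: (country, unemploy) with 1 = unemployed, 0 = employed.
cases = [("SI", 0), ("SI", 1), ("SI", 0), ("SI", 0),
         ("AT", 0), ("AT", 0), ("AT", 0), ("AT", 1), ("AT", 1)]

# AGGREGATE /BREAK=country /unemploy_mean=MEAN(unemploy) equivalent:
sums = defaultdict(lambda: [0, 0])     # country -> [sum, count]
for country, unemployed in cases:
    sums[country][0] += unemployed
    sums[country][1] += 1

unemploy_mean = {c: s / n for c, (s, n) in sums.items()}
print(unemploy_mean)  # {'SI': 0.25, 'AT': 0.4}
```

The resulting dictionary plays the role of the `unemploy_mean` table dataset: one row per country, one aggregated value.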
8. b) Perform regression analysis that includes the macro level variable
Task
Repeat the regression from before and add the aggregate unemployment information.
Solution:
* open the working_pop data set.
DATASET ACTIVATE working_pop .
*note that all countries are included.
freq country.
* add the values from the table on the country level.
MATCH FILES /FILE=*
/TABLE=unemploy_mean
/BY COUNTRY.
EXECUTE.
*check.
means unemploy_mean by country
/CELLS MEAN COUNT STDDEV
/STATISTICS ANOVA.
* include macro_level variable into regression.
regression VARIABLES = STARTIME sex_male age temp_lm ft unemploy_mean
/DEPENDENT STARTIME
/METHOD enter sex_male age temp_lm ft unemploy_mean .
End of Solution
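The MATCH FILES /TABLE step is a table lookup: each individual case receives the value stored for its country. A minimal Python sketch with made-up rates; the values are illustrative, not real unemployment rates:

```python
# Country-level lookup table (toy unemployment rates, made up).
unemploy_mean = {"SI": 0.25, "AT": 0.4}

# Individual-level records: (country, startime).
individuals = [("SI", 3), ("AT", 12), ("SI", 24)]

# MATCH FILES /TABLE /BY country equivalent: attach the country-level value to each case.
merged = [(country, startime, unemploy_mean[country])
          for country, startime in individuals]
print(merged)  # [('SI', 3, 0.25), ('AT', 12, 0.4), ('SI', 24, 0.25)]
```

After the merge, the macro-level rate varies only between countries, never within them, which is why its standard deviation within each country is zero in the check above.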
Ideas for thinking: Adding further macro-level variables: which sources could be used, and how would you include
them in the data set?
A typology of countries might also be sought. How would you build one?
9. Literature:
EUROSTAT (2013): Quality report of the European Union Labour Force Survey 2012, 2014 edition.
http://epp.eurostat.ec.europa.eu/portal/page/portal/product_details/publication?p_product_code=KS-TC-14-001
EUROSTAT (2013): EU Labour Force Survey explanatory notes.
http://epp.eurostat.ec.europa.eu/portal/page/portal/employment_unemployment_lfs/documents/EU_LFS_explanatory_notes_from_2014_onwards.pdf
MIMAS, The University of Manchester: Countries and Citizens: Linking International Macro and Micro Data.
Unit 4 Study Guide: An Introduction to combining macro and micro data.
https://www.esds.ac.uk/international/elearning/limmd/materials/studyguides/unit4-studyguide.pdf
Visser, Jelle (2013): Database on Institutional Characteristics of Trade Unions, Wage Setting, State Intervention
and Social Pacts (ICTWSS), Version 4, April 2013. Amsterdam Institute for Advanced Labour Studies (AIAS),
University of Amsterdam. http://www.uva-aias.net/207