Improvement of isolated handwritten word recognition using a classif... - Francisco Zamora-Martinez
The goal of this work is to improve the performance of \emph{off-line} handwritten text recognition systems based on hidden Markov models and on models hybridized with neural networks. To remove the effect of the language model, we address an isolated-word recognition task, where words are therefore stripped of their context. A study of the influence of word length on system performance led us to combine these classifiers with another one specialized in short words and exhibiting lower correlation with them. To this end, several multilayer perceptrons were trained to classify a subset of the vocabulary holistically. Combining the classifiers with a variant of the Borda-count voting method yields very satisfactory results.
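The Borda-style combination mentioned above can be illustrated with the plain textbook Borda count (the paper uses a variant, which is not reproduced here; the candidate words and rankings below are made-up examples): each classifier ranks the candidates, a candidate earns points equal to the number of candidates ranked below it, and the per-classifier scores are summed.

```python
# Plain Borda count over ranked classifier outputs (illustrative sketch).
def borda(rankings):
    n = len(rankings[0])                     # number of candidates
    scores = {}
    for ranking in rankings:                 # one ranking per classifier
        for place, cand in enumerate(ranking):
            # first place earns n-1 points, last place earns 0
            scores[cand] = scores.get(cand, 0) + (n - 1 - place)
    return max(scores, key=scores.get)

winner = borda([["casa", "cosa", "caso"],
                ["cosa", "casa", "caso"],
                ["casa", "caso", "cosa"]])
# "casa" wins with 5 points vs. 3 for "cosa" and 1 for "caso"
```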
A New MongoDB Sharding Architecture for Higher Availability and Better Resour... - leifwalsh
Most modern databases concern themselves with their ability to scale a workload beyond the power of one machine. But maintaining a database across multiple machines is inherently more complex than running it on a single machine. As soon as scaling out is required, a lot of new work is suddenly required to deal with problems like index suitability and load balancing.
Write optimized data structures are well-suited to a sharding architecture that delivers higher efficiency than traditional sharding architectures. This talk describes a new sharding architecture for MongoDB applications that can be achieved with write optimized storage like TokuMX's Fractal Tree indexes.
In this paper, we describe a novel approach to Part-Of-Speech tagging based on neural networks. Multilayer perceptrons are trained on corpus data, learning from contextual and lexical information. The Penn Treebank corpus has been used for the training and evaluation of the tagging system. The results show that the connectionist approach is feasible and comparable with other approaches.
Connectionist language models offer many advantages over their statistical counterparts, but they also have drawbacks, such as a much higher computational cost. This paper describes a novel method to overcome this problem: a set of normalization values associated with the most frequent N-grams is pre-computed, and the model is smoothed with lower-order connectionist or statistical N-gram models. The proposed approach compares favourably with standard connectionist language models and with statistical back-off language models.
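A toy sketch of the pre-computation idea (not the paper's model): in a softmax-normalized language model, the normalization constant Z(h) depends only on the history h, so it can be computed once for the most frequent histories and cached. The five-word vocabulary and the hash-based scorer below are stand-ins for a real network.

```python
import math

VOCAB = ["the", "cat", "sat", "mat", "on"]

def score(history, word):
    # stand-in for the network's unnormalized output for `word` given `history`
    return (hash((history, word)) % 7) / 7.0

def normalizer(history):
    # one full pass over the vocabulary: the expensive part
    return sum(math.exp(score(history, w)) for w in VOCAB)

# pre-compute Z once, at load time, for the most frequent histories
z_cache = {h: normalizer(h) for h in [("the",), ("on",)]}

def prob(history, word):
    z = z_cache.get(history) or normalizer(history)  # cache hit skips the full pass
    return math.exp(score(history, word)) / z
```

Queries whose history is cached avoid the full vocabulary pass, which is where the speed-up comes from; infrequent histories fall back to the exact computation.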
QConSF 2013: Top 10 Performance Gotchas for scaling in-memory Algorithms - Sri Ambati
Top 10 Performance Gotchas in scaling in-memory Algorithms
Abstract:
Math Algorithms have primarily been the domain of desktop data science. With the success of scalable algorithms at Google, Amazon, and Netflix, there is an ever growing demand for sophisticated algorithms over big data. In this talk, we get a ringside view in the making of the world's most scalable and fastest machine learning framework, H2O, and the performance lessons learnt scaling it over EC2 for Netflix and over commodity hardware for other power users.
Top 10 Performance Gotchas is about the white-hot stories of I/O wars, S3 resets, and muxers, as well as the power of primitive byte arrays, non-blocking structures, and fork/join queues; of good data distribution and fine-grained decomposition of algorithms into fine-grained blocks of parallel computation. It's a 10-point story of the rage of a network of machines against the tyranny of Amdahl, while keeping the statistical properties of the data and the accuracy of the algorithm.
Track: Scalability, Availability, and Performance: Putting It All Together
Time: Wednesday, 11:45am - 12:35pm
Write-optimization in external memory data structures (Highload++ 2014) - leifwalsh
After a long reign as the dominant on-disk data structure for databases and filesystems, B-trees are slowly being replaced by write-optimized data structures, to handle ever-growing volumes of data. Some write optimization techniques, like LSM-trees, give up some of the query performance of B-trees in order to achieve this.
A Fractal Tree is a write-optimized data structure that matches the insertion performance of an LSM-tree while maintaining the optimal query performance of a B-tree. It's inspired by many data structures (Buffered Repository Trees, B^ε trees, ...) but the real definition is just what we've implemented at Tokutek.
I'll provide background on B-trees and LSM-trees, an overview of how Fractal Trees work, where they differ from B-trees and LSM-trees, and how we use their performance advantages in some obvious and some surprising ways to power new MySQL and MongoDB features in TokuDB and TokuMX.
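The buffering idea behind these write-optimized trees can be sketched in miniature (this is an illustrative toy, not Tokutek's implementation): an internal node accumulates pending inserts in a buffer and flushes them to its children in bulk, so one large write carries many keys instead of one.

```python
# Toy one-internal-node, two-leaf "buffered tree" sketch.
class Leaf:
    def __init__(self):
        self.keys = []

BUFFER_SIZE = 4
PIVOT = "m"                     # single pivot: keys < "m" go to the left leaf
root_buffer = []                # pending insert messages held at the root
left, right = Leaf(), Leaf()

def insert(key):
    root_buffer.append(key)     # O(1): no leaf is touched yet
    if len(root_buffer) >= BUFFER_SIZE:
        flush()                 # one bulk flush moves many keys at once

def flush():
    for k in root_buffer:
        (left if k < PIVOT else right).keys.append(k)
    root_buffer.clear()

def lookup(key):
    # a query must check the buffers along its root-to-leaf path
    return key in root_buffer or key in (left if key < PIVOT else right).keys
```

The trade being made is visible even at this scale: inserts become cheap appends, while lookups pay a small extra cost to scan the buffer on the way down, which is why buffered structures keep buffers bounded per node.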
Training deep neural networks has long been a difficult task. Recently, diverse approaches have been presented to tackle these difficulties, showing that deep models outperform shallow ones in areas like signal processing, signal classification, and signal segmentation, regardless of the signal type, e.g. video, audio, or images. One of the most important methods is greedy layer-wise unsupervised pre-training followed by a fine-tuning phase. Despite the advantages of this procedure, it does not fit scenarios where real-time learning is needed, such as the adaptation of some time-series models. This paper proposes to couple both phases into one by modifying the loss function to mix the unsupervised and supervised parts. Benchmark experiments with the MNIST database demonstrate the viability of the idea for simple image tasks, and experiments with time-series forecasting encourage incorporating the idea into on-line learning approaches. The interest of this method for time-series forecasting is motivated by the study of predictive models for domotic houses with intelligent control systems.
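A minimal numeric sketch of the coupled objective: instead of unsupervised pre-training followed by supervised fine-tuning, a single loss L = L_supervised + beta * L_reconstruction is minimized. The tiny linear model (encoder weight w, prediction head u, decoder v), the data, and beta are illustrative assumptions, not the paper's setup.

```python
def mixed_loss(params, data, beta=0.5):
    w, u, v = params
    total = 0.0
    for x, y in data:
        h = w * x                           # "encoder"
        total += (u * h - y) ** 2           # supervised term
        total += beta * (v * h - x) ** 2    # unsupervised reconstruction term
    return total / len(data)

def train(data, steps=2000, lr=0.01, eps=1e-5):
    params = [0.5, 0.5, 0.5]
    for _ in range(steps):                  # plain gradient descent,
        base = mixed_loss(params, data)     # numeric gradients for brevity
        grads = []
        for i in range(len(params)):
            bumped = params[:]
            bumped[i] += eps
            grads.append((mixed_loss(bumped, data) - base) / eps)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

data = [(x, 3 * x) for x in (-2, -1, 1, 2)]  # supervised target: y = 3x
w, u, v = train(data)                        # both loss terms shape w jointly
```

After training, the prediction path satisfies u*w close to 3 and the reconstruction path v*w close to 1, showing that one optimization run serves both objectives at once, which is what makes the scheme usable on-line.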
Covers the basics of algorithms and algorithm analysis, including time complexity, space complexity, the three cases (best, average, worst), and an analysis of insertion sort.
*For knowledge purposes only*
*Hope you'll come up with a better one*
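The analysis described above can be pinned to concrete code. Here is insertion sort annotated with the standard complexity facts: O(1) extra space (it sorts in place), O(n) time in the best case (already sorted, the inner loop never runs), and O(n^2) in the average and worst cases (worst when reverse-sorted).

```python
def insertion_sort(a):
    for i in range(1, len(a)):          # grow the sorted prefix a[:i]
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:    # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                  # drop key into its correct slot
    return a
```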
Whether your data's in MySQL, a NoSQL, or somewhere in the cloud, you're likely paying decent money for storage and IOPS. With ever-growing data volumes, and the need for SSDs to cut latency and replication to provide insurance, your storage footprint is an important place to look for savings. It makes sense, then, that so many storage vendors tout compression as a key metric and differentiator.
The language vendors and users employ to reason about storage footprint and compression is embarrassingly vague if not meaningless or downright deceptive, but we can do better, and we must do better.
In this talk, we'll discuss each part of the durable storage stack, from the hardware on up, and how usage numbers can take on different meanings at each layer. We'll talk about what's important to know at each layer, and how to think about and talk about concepts like compression, fragmentation, write amplification, and wear leveling. Finally, we'll see different ways benchmarketers present data to lie to you, and learn some techniques for identifying and cutting through those kinds of lies.
Given at Percona Live Amsterdam 2015
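Back-of-the-envelope arithmetic (with illustrative numbers, not figures from the talk) shows why a single "storage" figure is ambiguous across layers: compression shrinks what the engine writes, while write amplification multiplies what the device actually writes and wears.

```python
app_bytes   = 100 * 2**30   # 100 GiB written by the application
compression = 3.0           # assumed 3:1 compression in the storage engine
write_amp   = 4.0           # assumed device writes per engine byte (e.g. page rewrites)

engine_bytes = app_bytes / compression    # what reaches the device interface
device_bytes = engine_bytes * write_amp   # what the flash actually writes

# The three layers report 100 GiB, ~33 GiB, and ~133 GiB for the "same"
# workload, so a vendor can truthfully quote whichever number flatters them.
```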
ScicomP 2015 presentation discussing best practices for debugging CUDA and OpenACC applications with a case study on our collaboration with LLNL to bring debugging to the OpenPOWER stack and OMPT.
In this paper we propose a family of Viterbi algorithms specialized for lexical-tree-based FSAs and HMM acoustic models. We present two algorithms to decode a tree lexicon with left-to-right models, with or without skips, and another algorithm that takes a directed acyclic graph as input and performs error-correcting decoding. They store the set of active states topologically sorted in contiguous memory queues. This reduces the number of basic operations needed to update each hypothesis and improves memory locality, reducing the expected number of cache misses and achieving a speed-up over other implementations.
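For orientation, here is the textbook Viterbi recurrence that such decoders implement; the paper's contribution, contiguous topologically sorted queues of active states, is an implementation strategy on top of this recurrence, and the toy left-to-right HMM below uses made-up numbers.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = best path score ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            prev, score = max(
                ((p, V[t - 1][p] * trans_p[p].get(s, 0.0)) for p in states),
                key=lambda x: x[1])
            V[t][s] = score * emit_p[s][obs[t]]
            back[t][s] = prev
    # trace back from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# toy left-to-right HMM: s1 may loop or advance to s2; s2 only loops
states = ["s1", "s2"]
start_p = {"s1": 1.0, "s2": 0.0}
trans_p = {"s1": {"s1": 0.5, "s2": 0.5}, "s2": {"s2": 1.0}}
emit_p = {"s1": {"a": 0.9, "b": 0.1}, "s2": {"a": 0.1, "b": 0.9}}
path = viterbi("aab", states, start_p, trans_p, emit_p)
```

In a left-to-right topology, transitions only go forward, which is exactly what lets active states be kept topologically sorted and processed in order from contiguous queues.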
ESAI-CEU-UCH solution for the American Epilepsy Society Seizure Prediction Challenge - Francisco Zamora-Martinez
Presentation given at Cyient Insights (Hyderabad, India).
This work presents the solution proposed by Universidad CEU Cardenal Herrera (ESAI-CEU-UCH) for the Kaggle American Epilepsy Society Seizure Prediction Challenge. The proposed solution placed 4th in the competition.
Different kinds of input features (different preprocessing pipelines) and different statistical models are proposed. This diversity was motivated by the goal of improving the model-combination result.
It is important to note that none of the proposed systems uses the test set for calibration. The competition rules allowed model calibration on the test set, but doing so would reduce the reproducibility of the results in a real-world implementation.
Linking Education and Production through the Liaison Offices: The ca... - IDEC SA
Workshop, 3 June 2015
"Technology transfer from research to industry"
Amphitheatre of Α.Ε.Ι. Πειραιά Τ.Τ.
1st workshop: "Success factors in technology transfer"
Presentation: "Linking Education and Production through the Liaison Offices: The case of the Liaison Office of Α.Ε.Ι. Πειραιά Τ.Τ." – Chr. Tsitsis, Deputy Head of Internal Operations, Liaison Office Δ.Α.ΣΤΑ Α.Ε.Ι. Πειραιά Τ.Τ.
An Online Strategy for the Promotion of Mastiha Product - The emerging Commun... - Petros Kavassalis
Workshop, Chios 2015: The University of the Aegean is collaborating with the Chios Mastiha Growers Association and the Greek Free/Open Source Software Society (ΕΕΛ/ΛΑΚ) to develop an online strategy for Mastiha!
- ΜΟΝΑΔΕΣ ΑΡΙΣΤΕΙΑΣ ΕΛΛΑΚ (http://ma.ellak.gr)
- Open Community: Mastiha Wonder by mastiholics
Scientix: the community for science education in Europe. Examples of resources from the Scientix repository for environmental and health education, and ways of using the Scientix repository to engage students in STEM careers.
STEM Alliance European Project: Linking the School Community and Industry - Panagiota Argiri
STEM Alliance European Project: Linking the School Community and Industry. 2nd Open Campus of the European project Developing and Evaluating Skills for Creativity and Innovation (DESCI), organised by the science-communication organisation Science View in collaboration with the Department of Philosophy, Pedagogy and Psychology of the National and Kapodistrian University of Athens and the 1st Experimental Gymnasium of Athens, 8-9 December 2017.
Curriculum and examination syllabus for the course "Economics" (ΑΟΘ) in the 3rd grade of vocational lyceum. You can also see the detailed course syllabus at the following link:
https://view.genially.com/6450d17ad94e2600194eb286
Weatherman 1-hour Speed Course for Web [2024] - Andreas Batsis
Popularised meteorology teaching. This presentation concisely provides the 20% of the information about how weather works that will enable the reader to interpret 80% of weather situations using internet tools. The presentation's rationale rests primarily on application and only secondarily on scientific explanation, which is kept to the bare essentials.
Principles of Economic Theory - the written paper of the panhellenic examinations - Panagiotis Prentzas
Principles of Economic Theory (ΑΟΘ): what candidates should pay attention to during the panhellenic examinations, both in the structure of their answers and in the presentation of their written paper.
You can also see the interactive presentation at www.study4economy.edu.gr.
1. @pospaseis: a search service for teacher secondments
10th Panhellenic & International Conference "ICT in Education"
Ioannina, 23-25 September 2016
Stefanos Ougiaroglou
stoug@{uom,it.teithe}.gr
Dept. of Informatics Engineering
Alexander TEI of Thessaloniki
Georgios Evangelidis
gevan@uom.gr
Dept. of Applied Informatics
University of Macedonia
2. HCICTE 2016 http://apospaseis.ellak.gr
Teacher secondments...
● Every year, thousands of teachers turn to the temporary solution of a one-year secondment, either to a school or to a body of the Ministry of Education (ΥΠΠΑΙΘ), in order to be in the area where their interests lie
● Each year, secondment applicants must decide:
- whether to apply to a ΥΠΠΑΙΘ body, and if so, to which one
- whether to apply to ΠΥΣΔΕ/ΠΥΣΠΕ areas, and if so, to which ones and in what order to list them in their application
● To answer these questions, many teachers consult the secondment decisions of previous school years
3. The problem...
● Secondment decisions are announced during the second half of each year through a large number of ministerial decisions
● More than 70 files of different types (pdf, xls, xlsx, doc, docx, html) containing secondment decisions are uploaded to the ΥΠΠΑΙΘ website every year
● The files follow no fixed specification:
- some decisions contain tables of names, others do not
- the same concept is recorded in different ways
- some decisions include the teachers' fathers' names, others do not
- etc.
● Important information about secondments remains "hidden" in the multitude of files
4. Contribution (1/2)
Call by ΕΔΕΤ Α.Ε. for
"Financial support for open-source (ΕΛ/ΛΑΚ) development projects"
within the project
"Electronic Services for the Development and Dissemination of Open Source Software"
5. Contribution (2/2)
● @pospaseis (http://apospaseis.ellak.gr):
- a service that helps teachers uncover the information hidden in the numerous secondment-decision files that follow no fixed specification
- a multi-criteria search engine that lets users search for exactly the information that interests them
- a user can "discover", for example, how many teachers, which ones, and of what specialty were seconded to a particular body
- the service also lets its users browse the original ΥΠΠΑΙΘ decision files
7. Developers? Why?
● @pospaseis offers an API that provides a convenient way to retrieve the results of a search in JSON format
● Developers can run searches from within their own applications or sites
● In this way, the secondment data takes on the character of open data
23. Searches via the API
● All it takes is an HTTP request to the script located at:
http://www.apospaseis.eu/api.php
● Examples:
Search for the 2015-2016 secondments of informatics teachers (ΠΕ19-20) to the Universities-TEI category
http://www.apospaseis.eu/api.php?search_type=search&eidikotita=ΠΕ19-20&type=Πανεπιστήμια-ΤΕΙ&year_apospasi=2015-2016
Search for statistics on the secondments of informatics teachers (ΠΕ19-20)
http://www.apospaseis.eu/api.php?search_type=statistics&eidikotita=ΠΕ19-20
Search for the secondments granted to teachers whose surname contains the characters "παδόπουλο"
http://www.apospaseis.eu/api.php?search_type=search&lastname=παδόπουλο
Search for the secondments granted to the teacher with registry number 567891
http://www.apospaseis.eu/api.php?search_type=search&am=567891
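A small client sketch for API calls of this shape: it only builds the request URL with properly encoded query parameters. The parameter names come from the examples on this slide; the JSON response format is not specified here, so no response parsing is assumed.

```python
from urllib.parse import urlencode

API = "http://www.apospaseis.eu/api.php"

def search_url(**params):
    # e.g. search_type="search", eidikotita="ΠΕ19-20"
    # urlencode percent-encodes Greek values so the URL is safe to send
    return API + "?" + urlencode(params)

url = search_url(search_type="search", eidikotita="ΠΕ19-20",
                 type="Πανεπιστήμια-ΤΕΙ", year_apospasi="2015-2016")

# Fetching the JSON would then look like:
#   import json, urllib.request
#   results = json.load(urllib.request.urlopen(url))
```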
24. Directions for extensions (1/2)
● Different ΥΠΠΑΙΘ departments are responsible for issuing decisions of specific types; as a result, the decisions they issue differ completely from one another
● If the secondment data were available in a fixed format, an automatic mechanism could be built to feed the @pospaseis database with new secondment data
● Achieving this requires coordinating all ΥΠΠΑΙΘ departments so that they issue decision files following a fixed specification
● With such an automatic mechanism in place, no human intervention would be needed to feed the database
26. Conclusions
● @pospaseis is a tool that helps teachers choose the body or area (ΠΥΣΔΕ, ΠΥΣΠΕ) to which to apply for secondment
● A more ambitious goal is for @pospaseis to contribute to transparency in the secondment process