Amnesty International: language in action (LRC - Language Resource Centre), by Grupo Inmigra i+d
Amnesty International is a global movement of over 3 million people in more than 150 countries who campaign to end human rights abuses. It works independently of any government, political ideology, economic interest or religion and is funded mainly by membership fees and public donations. The presentation discusses using languages effectively to communicate the organization's message and defend human rights through aligned projects, with a shift from separating core and non-core languages to prioritizing strategic and tactical projects in many languages, including English. It also summarizes the work of the Language Resource Centre, covering translation, interpretation, terminology, tools, and the localization of productions across 50 languages through in-house experts and freelancers.
Muhammad Yunus, the 2006 Nobel Peace Prize winner and founder of Grameen Bank, discusses his vision for the world in 2050 in his book "Creating a World Without Poverty". He believes that by 2050, poverty will be eliminated from the world through social business and microcredit programs that empower people with very little resources. Local communities around the world will be self-sustaining using innovative solutions tailored to their needs. Information and communication technologies will help connect people globally to access education, healthcare and financial services.
The document discusses Akoma Ntoso, an open legal XML standard for parliamentary and legal documents. It describes Akoma Ntoso's structures for organizing legal documents and their metadata in XML, allowing documents to be searched, displayed, and linked across repositories and countries. Key features include identifying a document's parts, semantic descriptions of content, and mechanisms like FRBR and Top Level Classes for cross-referencing concepts and versions unambiguously.
A Repository of Free Lexical Resources for African Languages: The Project and..., by Guy De Pauw
FreeDict is an open source project that hosts free bilingual dictionaries for African languages. Dictionaries uploaded to FreeDict can be encoded using XML standards like TEI P5, and can then be accessed through desktop clients or a Firefox add-on. The article discusses the development process for dictionaries on FreeDict, from simple glossaries to more complex machine-readable formats, using the example of the Swahili-English dictionary. Future plans include adding more XML features and tools to facilitate dictionary development and access.
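As a rough illustration of the TEI P5 encoding mentioned above, here is a minimal, hypothetical Swahili-English entry and how it might be read with Python's standard library. The entry structure and word pair are invented for illustration; real FreeDict entries are richer. The TEI namespace URI is the standard one.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal TEI P5 dictionary entry (Swahili-English).
# Real FreeDict entries are richer; this only sketches the idea.
TEI = "{http://www.tei-c.org/ns/1.0}"
entry_xml = """\
<entry xmlns="http://www.tei-c.org/ns/1.0">
  <form><orth>simba</orth></form>
  <gramGrp><pos>noun</pos></gramGrp>
  <sense>
    <cit type="translation" xml:lang="en"><quote>lion</quote></cit>
  </sense>
</entry>
"""

entry = ET.fromstring(entry_xml)
headword = entry.find(f"{TEI}form/{TEI}orth").text
translation = entry.find(f"{TEI}sense/{TEI}cit/{TEI}quote").text
print(headword, "->", translation)  # simba -> lion
```

The same machine-readable structure is what lets desktop clients and browser add-ons present the data in different ways.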
The document discusses multimedia architecture and markup languages. It describes two common multimedia architectures - monolithic and shell architectures. It then covers markup languages, including HTML, XML, and SGML. XML is presented as being based on SGML but simplified for use on the web. Stylesheets are described as separating document contents from style to allow flexibility.
The document discusses the multilingual web and internationalization (I18N) and localization (L10N) topics. It covers traditional topics like language tags and internationalized domain names. It also discusses newer topics like the "long tail" effect and its consequences for a multilingual web with more specific content and services. Metadata standards like XLIFF and ITS 1.0 help bridge technology gaps and make localization of long tail content easier and more affordable. The document advocates for content authors and developers to use standards like ITS to better enable computer-aided translation tools.
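For instance, the ITS 1.0 "Translate" data category marks content that translation tools should skip. A sketch follows, with the document contents invented; the ITS namespace URI is the one defined in the specification.

```python
import xml.etree.ElementTree as ET

ITS = "{http://www.w3.org/2005/11/its}"
# Invented sample document; its:translate="no" flags non-translatable
# text per the ITS 1.0 Translate data category.
doc = ET.fromstring(
    '<doc xmlns:its="http://www.w3.org/2005/11/its">'
    '<p>Click the Save button.</p>'
    '<p its:translate="no">printf("hello");</p>'
    '</doc>'
)

# A CAT tool could use the flag to decide what enters translation.
translatable = [p.text for p in doc.iter("p")
                if p.get(f"{ITS}translate", "yes") != "no"]
print(translatable)  # ['Click the Save button.']
```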
A Logic-Based Approach To Semantic Information Extraction, by Amber Ford
The document describes a logic-based approach to semantic information extraction from unstructured documents. It represents documents as a two-dimensional plane composed of nested rectangular regions called portions. Each portion contains a piece of text annotated according to an ontology. It uses DLP+, an extension of DLP with object-oriented features, to represent the ontology and encode extraction patterns as rules. The patterns are used to automatically extract semantic information from documents by associating portions with ontology elements. The approach allows extracting information according to semantics rather than just syntax, and can extract from different document formats like text and HTML. It enables semantic classification of documents for applications like email filtering and skills extraction from resumes.
This workshop introduces the use of concept mapping (not mind mapping!) for identifying structure in complex texts and for creating structure as you write. CmapTools is freeware that is well suited to this kind of structural work on your writing. Visit https://cmap.ihmc.us/ to download CmapTools and study their excellent resources.
This document discusses business rules and their use in financial reporting using XBRL. It begins by defining key concepts like semantics, metadata, and business rules. It explains that business rules express the semantic meaning of financial data and reports. The document then provides examples of how business rules can be used to express financial reporting relationships, calculations, and disclosure requirements. It argues that expressing business rules in a standardized way through XBRL can improve financial analysis and reporting.
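As a toy illustration of the kind of rule involved, consider the accounting identity Assets = Liabilities + Equity, expressed here as plain Python over invented facts rather than real XBRL calculation-linkbase syntax:

```python
# Invented reported facts; a real XBRL instance would carry contexts,
# units, and taxonomy-defined concepts.
facts = {"Assets": 500.0, "Liabilities": 300.0, "Equity": 200.0}

def check_balance(f, tolerance=0.01):
    """Business rule: Assets must equal Liabilities + Equity."""
    return abs(f["Assets"] - (f["Liabilities"] + f["Equity"])) <= tolerance

print(check_balance(facts))  # True
print(check_balance({"Assets": 500.0,
                     "Liabilities": 300.0,
                     "Equity": 150.0}))  # False
```

Encoding such rules in a standard form is what lets any receiving system, not just the preparer's, validate the report.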
This document discusses human translation workflow and contains three sections. Section I provides an overview of human translation workflow. Section II discusses professional translation, including market studies, emerging trends, and the translation workflow. Section III focuses on corpus-based translation, outlining guidelines for corpus creation, using corpora for translation training, and concordancing tools.
This document provides an introduction to XBRL, including:
- The structure of the lesson, which covers theory, a status update, and more theory.
- XBRL is not new technology, but specifying it for financial reporting is new. Its adoption has been slower than expected.
- XBRL relates to areas like business process management, business process reengineering, culture/change management, finance, and auditing.
- It defines XBRL as an XML-based language for exchanging business information in a standardized way.
The document provides an introduction to web services, including their origins, characteristics, life cycle, requirements, and advantages/disadvantages. It discusses how web services use XML, SOAP, WSDL, and UDDI to allow programs to communicate over the web. The document also introduces XML, describing its structure, elements, attributes, and validation using DTDs.
The document summarizes the use of computer lexica in optical character recognition (OCR) and information retrieval. It discusses what a computer lexicon is, how it differs from an electronic dictionary, and examples of lexica built for Dutch texts in the IMPACT project. Lexica help improve OCR accuracy and enable more advanced searching of text corpora by accounting for spelling variants. The project achieved significant error reductions in OCR of Dutch historical texts by using tailored lexica.
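The variant-lookup idea can be sketched as follows; the spelling pairs are a few well-known historical Dutch forms used purely for illustration, not the IMPACT lexica themselves:

```python
# Toy lexicon mapping historical Dutch spellings to modern forms
# (illustrative pairs only; real IMPACT lexica are far larger).
variant_lexicon = {"mensch": "mens", "visch": "vis", "huys": "huis"}

def normalize(token):
    """Map a (possibly historical) spelling to its modern form."""
    t = token.lower()
    return variant_lexicon.get(t, t)

# Searching for the modern form also retrieves historical variants.
corpus = ["Mensch", "huys", "vis"]
hits = [w for w in corpus if normalize(w) == "mens"]
print(hits)  # ['Mensch']
```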
eXtensible Business Reporting Language (XBRL) is an XML-based format for tagged, machine-readable data (metadata) and a standard way to communicate business and financial information.
This presentation introduces XBRL & MAIA Intelligence's postXBRL solution with BI for financial reporting.
XML evolved from EDI to provide a standard format for data exchange that is not dependent on technologies or platforms. It addresses issues with HTML such as lack of structure, validation, and suitability for representing data. XML allows data to be tagged and organized hierarchically to represent relationships and enable both human- and machine-readable interpretation.
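A small sketch of that hierarchical tagging, built with Python's standard library (the element and attribute names are invented):

```python
import xml.etree.ElementTree as ET

# An order with a nested customer and item: the nesting itself encodes
# the data relationships, which presentational HTML was not designed to do.
order = ET.Element("order", id="1001")
customer = ET.SubElement(order, "customer")
ET.SubElement(customer, "name").text = "Acme Ltd"
item = ET.SubElement(order, "item", sku="X-42")
ET.SubElement(item, "qty").text = "3"

xml_text = ET.tostring(order, encoding="unicode")
print(xml_text)
```

The resulting document is readable both by a person and by any XML parser, on any platform, which is the portability point made above.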
The document discusses the architecture and design of the Federal Digital System (FDsys), including its system architecture, data model, application architecture, ingest process, data processing and search features. Key aspects include using a data-driven architecture to group content into packages and extract metadata, which is then used for search and delivery of content.
OFBiz is an open source enterprise automation software project written in Java that can be used for ERP, CRM, e-commerce, and supply chain management applications. It uses a service engine to interact with business logic and an entity engine to interact with databases in a relational manner. The framework and applications are built on a flexible data model and can integrate third-party tools. The document introduces OFBiz and reflects on whether it may be a good fit depending on needs and support considerations.
The document discusses XML and related technologies like XML databases and MPEG-7. It defines XML and describes how XML documents can be stored and queried using native XML databases. It also explains the key components and applications of the MPEG-7 standard for describing multimedia content.
110 Introduction To Xbrl Taxonomies And Instance Documents Sept 2007 Print Ve..., by helggeist
The document provides an overview of XBRL (eXtensible Business Reporting Language) and compares it to XML. It discusses XBRL taxonomies, which define reporting concepts and relationships, and XBRL instance documents, which contain reported facts that are constrained by the taxonomy. While XML provides a basis, XBRL was created to address XML's limitations for business reporting by allowing flexible extension of reporting structures and validating semantics and business rules, not just syntax.
XML is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. It was designed to carry data, not display it like HTML. XML is important because it separates data from presentation, allows data to be shared across different systems, and makes data easier to store and process. The basic building blocks of XML include elements, attributes, entities, processing instructions, comments, and tags.
This document discusses conceptualizations of multinational corporations (MNCs) within international business and management research. It notes that MNCs have been a primary driver of globalization. The approaches taken to understand MNCs vary, as there are multiple perspectives. Prominent models for understanding MNC evolution include internalization theory and the eclectic paradigm, though these have been criticized. The internationalization of firms from emerging economies offers a new perspective in MNC research.
The document provides an introduction to XML, including:
1. XML is a markup language that allows users to define their own tags to describe data, unlike HTML which has predefined tags.
2. XML uses DTDs or schemas to define the structure and elements of an XML document.
3. Namespaces are used in XML to distinguish identically named elements and avoid collisions between elements from different vocabularies. Namespaces are assigned a URI to uniquely identify them.
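The namespace mechanism in point 3 can be sketched with Python's standard library, where namespaced names use `{URI}local` notation; the two `urn:example:*` URIs here are made up for illustration:

```python
import xml.etree.ElementTree as ET

# Two vocabularies both define <title>; namespace URIs (invented here,
# "urn:example:*") keep the identically named elements from colliding.
doc = ET.fromstring(
    '<doc xmlns:bk="urn:example:book" xmlns:ht="urn:example:html">'
    '<bk:title>XML in a Nutshell</bk:title>'
    '<ht:title>Page title</ht:title>'
    '</doc>'
)
book_title = doc.find("{urn:example:book}title").text
page_title = doc.find("{urn:example:html}title").text
print(book_title, "|", page_title)  # XML in a Nutshell | Page title
```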
This document describes the design of, and the training of raters for, a test of oral communicative effectiveness in Spanish (PECOLE) for immigrant workers in the Community of Madrid. It explains the phases of the project, which include needs analysis, test design and specifications, and rater training. The test assesses candidates' ability to communicate orally in Spanish at A1 level through an interview and uses a rating scale of apt…
Wanted: Best Practices for Collaborative Translation, by Grupo Inmigra i+d
This document discusses collaborative translation and outlines some common issues. It begins with a brief history of collaborative translation approaches from 2005-2011. It then outlines different flavors of collaborative translation like crowdsourcing, terminology resources, and translation memory sharing. Common challenges are discussed such as alignment with business goals, quality control, crowd motivation, and defining the professional role. The talk concludes that capturing best practices for collaborative translation in the form of design patterns would be useful.
Similar to Collaborative Translation in not-for-profit organizations
This document announces a one-day conference on immigration and intercultural communication to be held at the Universidad Europea de Madrid. The event will consist of an opening presentation, a lecture on immigration and postcolonialism, and a round table on interculturality as a tool for integration, with the participation of researchers and representatives of immigration-related organizations. The aim is to reflect on the importance of interculturality for the integration of immigrants.
This document describes INMIGRA-TERM, an online terminology database containing 200 entries for the main terms used by Madrid's public administration to communicate with immigrants on topics such as residence, employment, housing, health, education, and social services. The database was created by analysing texts from the Portal Inmigra Madrid between 2008 and 2009 and storing the translations. Users can search for terms in INMIGRA-TERM and access terminological…
The document discusses translation and interpreting in relation to immigration. It notes that if people accept and appreciate others with cultural differences, those differences cease to be a problem. It also describes the development of electronic tools for intercultural mediation and a terminology database called INMIGRA-TERM, which contains 200 entries on terms used by the public administration to communicate with immigrants on topics such as residence, employment, and social services.
This document presents a multidisciplinary linguistic study of the immigrant population in the Community of Madrid, carried out by a research group made up of experts in sociolinguistics, translation, media, teaching Spanish as a foreign language, and project management. The group analyses linguistic, sociocultural, and educational aspects of immigration.
Initial-level language certification for immigrants in a work context…, by Grupo Inmigra i+d
The document describes the design and validation of a listening comprehension test for immigrants at level A2 in Spanish as a foreign language. The test assesses the skills needed to function in administrative and work contexts through video-based communicative tasks. The approach takes into account the processes, strategies, and knowledge involved in listening comprehension for this group. The test was validated with 184 participants of diverse origins and proved suitable for their needs and…
"The use of video to assess listening comprehension at beginner levels". Encu…, by Grupo Inmigra i+d
The document describes the design of a listening comprehension test for the DILE certification exam. It is based on a theoretical framework that takes into account the context and needs of learners of Spanish as a foreign language. The test uses short videos simulating real-life situations and multiple-choice questions to assess comprehension of details, main ideas, and inferences. The design favours authentic assessment consistent with the communicative approach.
Spanish for professional purposes: the comprehension needs of immigrants…, by Grupo Inmigra i+d
This document describes the listening comprehension needs of adult immigrants for functioning in administrative and work settings in Spain. It explains that these learners need to develop micro-skills such as recognizing intonation, processing specific information, and inferring communicative intent. It also highlights the importance of a communicative methodology that uses audiovisual material to practise these skills in context.
"Preparing texts and questions to assess listening comprehension". XXI C…, by Grupo Inmigra i+d
The document deals with preparing texts and questions to assess listening comprehension. It explains that the texts must have validity, reliability, and authenticity. It also covers considerations such as the linguistic features of the text, the micro-skills assessed, and the type of material used (audio or audiovisual).
Journalistic discourse on Latin American immigration in Spain…, by Grupo Inmigra i+d
The UEM-Medios group is made up of 7 researchers with doctorates and degrees in philology, information sciences, and philosophy. Its tasks for 2010 include expanding and annotating its corpus of news items on immigration, publishing discourse analyses of immigration in the media, and producing a style guide for the treatment of immigration in the media. The group is part of a broader UEM-funded project that includes linguistic research…
Initial phases of implementing an E/L2 language certification, by Grupo Inmigra i+d
This document describes the piloting and implementation phases of a Spanish certification test for immigrants in the Community of Madrid. Partial and full pilot tests were carried out in several centres with more than 50 participants of various nationalities. Based on the results, aspects such as the test's design, instructions, and items were adjusted. The piloting was useful for validating the certification test and improving its reliability.
Design of communicative E/L2 assessment tests, by Grupo Inmigra i+d
The document describes the communicative tasks proposed for a test assessing the Spanish level of immigrant workers. The test includes tasks in reading comprehension, listening comprehension, written expression and interaction, and oral expression and interaction, assessing the language skills needed to solve problems and meet work and administrative obligations.
Analysis of the communicative language needs of the immigrant population, by Grupo Inmigra i+d
Phase 1 of the project analysed the linguistic and communicative needs of immigrant workers in Madrid through three investigations: ethnographic observation of their interactions, interviews with the professionals who serve them, and surveys of teachers of Spanish for immigrants. The results showed the need for a curriculum adapted to their real needs and the importance of certain content at beginner levels of Spanish.
The document describes the different phases of a language training and certification programme for immigrant workers in the Community of Madrid. It covers the analysis of communicative needs, the design of the certification test and its sections, and activities to publicize the programme. It also presents a web portal to support the learning and teaching of Spanish for immigrants.
Pragmatic and sociocultural criteria for the selection of texts and genres…, by Grupo Inmigra i+d
This document describes the criteria for selecting texts and discourse genres for an A2-level Spanish certification exam for immigrant workers. It is based on the communicative needs of immigrants at work and in everyday life, as well as on the standards of the Common European Framework of Reference. The exam includes tasks in reading comprehension and written and oral expression and interaction on personal, work, and administrative topics, using genres such as emails.
Variables sociculturales y comunicativas para un diseño curricular de español...Grupo Inmigra i+d
This document presents a project to design an A1–A2 Spanish certification exam for immigrant workers. The project analyzes the sociocultural and communicative variables to include in a Spanish curriculum specific to this group. The exam will assess linguistic and pragmatic competences in workplace, administrative, and public contexts through oral and written tasks. The document describes the project's objectives, theoretical framework, and stages.
Competencia léxica y sociocultural como base de la competencia comunicativa e...Grupo Inmigra i+d
The document discusses the relationship between lexical competence and intercultural competence in learning Spanish as a foreign language. It explains that lexical knowledge includes semantic, pragmatic, grammatical, and cultural dimensions. It also analyzes how the lexicon reflects culture through the cultural referents associated with words. Finally, it raises questions about how to develop both competences in an integrated way to avoid intercultural misunderstandings.
"Implantación de una certificación de nivel A1 + 1 y diseño curricular especí...Grupo Inmigra i+d
The document describes a study on the design of an A1+1-level Spanish certification and a curriculum specific to immigrant workers. The study analyzes immigrant workers' communicative and sociocultural needs and how the Common European Framework of Reference for Languages can be applied to develop the necessary linguistic and pragmatic competences. Finally, it raises questions about aspects such as forms of address, job-interview simulations, and the scope…
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly how long it takes to uncover interesting behavior in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security-analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean, optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
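The general seed-trimming idea, dropping bytes whose removal does not change the target's observed behavior, can be sketched as follows. This is an illustrative toy in the spirit of tools like afl-tmin, not the DIAR implementation itself, and `fingerprint` is a hypothetical stand-in for real coverage feedback from the target program:

```python
# Toy sketch: shrink a fuzzing seed by dropping byte ranges whose removal
# leaves the target's observable behavior unchanged. A real tool would
# replace fingerprint() with coverage feedback from instrumented execution.

def fingerprint(data: bytes) -> int:
    # Stand-in for "paths exercised by the target": here, simply the set of
    # distinct non-whitespace bytes the hypothetical parser reacts to.
    return hash(frozenset(b for b in data if b > 0x20))

def trim_seed(seed: bytes, chunk: int = 4) -> bytes:
    baseline = fingerprint(seed)
    data = seed
    i = 0
    while i < len(data):
        candidate = data[:i] + data[i + chunk:]
        if fingerprint(candidate) == baseline:
            data = candidate   # chunk was uninteresting: drop it
        else:
            i += chunk         # chunk matters: keep it, move on
    return data
```

With a real coverage signal, the surviving bytes are exactly those the target cares about, giving the fuzzer a leaner seed to mutate.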
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users want to take full advantage of the features available on their devices, but many of those features trade security for convenience and capability. This best-practices guide outlines steps users can take to better protect their personal devices and information.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally taught in software curricula, so many of us cobble this knowledge together from whichever vendor or ecosystem we were first introduced to and whatever happens to be part of our current company’s observability stack.
While the dev and ops silos continue to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and this talk shares the foundational concepts to build on.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of website design and development with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into integrating generative AI into a test automation solution using OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyper-personalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Presentation of the OECD Artificial Intelligence Review of Germany
Collaborative Translation in not-for-profit organizations
1. COLLABORATIVE TRANSLATION IN NOT-FOR-PROFIT ORGANIZATIONS RED INMIGRA 14 NOVEMBER 2011 Dra. Celia Rico Pérez Universidad Europea de Madrid [email_address] http://collaborateandtranslate.wordpress.com/