Probabilistic Graphical Models Reading Group, §3.3-3.4 (Yuki Yoshida)
Presentation slides from a reading group on D. Koller, "Probabilistic Graphical Models". The slides follow:
3.3 Independencies in Graphs
3.4 From Distributions to Graphs
http://wbawakate.connpass.com/event/31613/
Debian Linux on Zynq (Xilinx ARM-SoC FPGA) Setup Flow (Vivado 2015.4) (Shinya Takamaeda-Y)
The document describes the process to set up Debian Linux on a Zynq FPGA board using a Zybo board as a reference platform. The key steps include:
1. Developing the hardware design in Vivado, including adding a CPU, GPIO for LEDs and switches, and generating a bitstream;
2. Compiling U-boot and the Linux kernel, as well as creating a device tree and root filesystem;
3. Setting up an SD card and booting the system from the SD card.
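Steps 2 and 3 above can be outlined as a command sketch. This is a hedged outline only: the defconfig names, load address, and mount paths are assumptions that vary by U-Boot/kernel version and board, so consult the Zybo and Xilinx documentation for the exact targets.

```shell
# Hedged outline of steps 2-3 (config names and paths are assumptions)
export CROSS_COMPILE=arm-linux-gnueabihf-
export ARCH=arm

# 2a. Build U-Boot for the Zybo
cd u-boot-xlnx
make zynq_zybo_defconfig && make -j4

# 2b. Build the Linux kernel and the device tree blob
cd ../linux-xlnx
make xilinx_zynq_defconfig && make -j4 uImage LOADADDR=0x8000
make zynq-zybo.dtb

# 3. Populate the SD card: a FAT boot partition with BOOT.BIN, uImage,
#    and devicetree.dtb, plus an ext4 partition holding the Debian rootfs
cp BOOT.BIN uImage devicetree.dtb /mnt/boot/
```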
This document provides information about using high-level programming languages to generate hardware implementations on FPGAs. It discusses how high-level synthesis (HLS) can be used to synthesize register transfer level (RTL) descriptions from C/C++ or Python code, allowing hardware to be programmed at a higher level of abstraction without manually writing RTL. Specific HLS tools mentioned include Xilinx Vivado HLS, Altera OpenCL, and Veriloggen for Python; source languages discussed include C, C++, Java, and Python.
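The core idea can be sketched in plain Python: a function emits Verilog RTL text from a higher-level description. Note this is a toy illustration of Python-driven RTL generation, not the actual Veriloggen API.

```python
# Toy sketch: generate Verilog RTL text from Python, in the spirit of
# metaprogramming tools like Veriloggen (this is NOT its real API).

def make_counter(name="counter", width=8):
    """Emit Verilog for a simple synchronous up-counter module."""
    return "\n".join([
        f"module {name} (",
        "  input  wire CLK,",
        "  input  wire RST,",
        f"  output reg [{width - 1}:0] COUNT",
        ");",
        "  always @(posedge CLK) begin",
        "    if (RST) COUNT <= 0;",
        "    else     COUNT <= COUNT + 1;",
        "  end",
        "endmodule",
    ])

print(make_counter())
```

The appeal is that parameters like `width` become ordinary function arguments, so design variants are generated programmatically instead of edited by hand.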
This document discusses building a finite state transducer (FST) for efficient dictionary lookups during tokenization. It describes building the FST by iterating through a word list, freezing states once their suffixes can no longer change, and merging equivalent states. The built FST is then compiled into a program that can be executed by a virtual machine to look up words. The program represents the FST as a list of instructions including transition characters and output values; running the program amounts to traversing the FST from a word to its output.
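A minimal sketch of the compile-then-execute idea: flatten a word-to-id dictionary into an instruction list (the "program") and look words up with a tiny VM. This toy builds a plain trie and skips the state freezing/merging that makes real tokenizer FSTs compact; the instruction encoding is an assumption for illustration.

```python
# Minimal sketch: compile a word->id dictionary into a flat instruction
# list and look words up with a small virtual machine.

def compile_fst(words):
    """words: dict mapping word -> int. Returns (program, root address)."""
    trie = {}
    for word, out in words.items():
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node[None] = out  # output stored under a None "final" arc

    prog = []  # each instruction: (char_or_None, target_or_output, is_last)

    def emit(node):
        arcs = [(ch, child) for ch, child in node.items() if ch is not None]
        targets = [emit(child) for _, child in arcs]  # children first
        start = len(prog)
        entries = []
        if None in node:                      # accepting state: final arc first
            entries.append((None, node[None]))
        entries += [(ch, t) for (ch, _), t in zip(arcs, targets)]
        for i, (ch, t) in enumerate(entries):
            prog.append((ch, t, i == len(entries) - 1))
        return start

    return prog, emit(trie)

def lookup(prog, root, word):
    """Run the VM: follow one matching arc per input character."""
    addr = root
    for ch in word:
        pc = addr
        while True:
            c, target, last = prog[pc]
            if c == ch:
                addr = target
                break
            if last:
                return None               # no arc for this character
            pc += 1
    c, out, _ = prog[addr]                # word consumed: accepting state?
    return out if c is None else None

prog, root = compile_fst({"ab": 1, "abc": 2, "b": 3})
lookup(prog, root, "abc")   # -> 2
lookup(prog, root, "ac")    # -> None (not in the dictionary)
```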
This document discusses key concepts in natural language processing including parse trees, part-of-speech tags, and dependency trees. It also contains mathematical formulas for Charles' Law and the ideal gas law, along with their variables and constants described in short phrases.
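For reference, the two gas laws the document mentions can be checked numerically. The specific numbers below (standard temperature and pressure for one mole of gas) are illustrative, not taken from the document.

```python
# Quick numeric check of the two gas laws mentioned above.
R = 8.314  # ideal gas constant, J/(mol*K)

def ideal_gas_volume(n, T, P):
    """Volume from the ideal gas law PV = nRT (SI units)."""
    return n * R * T / P

def charles_volume(V1, T1, T2):
    """Charles' law at constant pressure: V1/T1 = V2/T2."""
    return V1 * T2 / T1

# One mole at 273.15 K and 101325 Pa occupies about 0.0224 m^3 (22.4 L)
V = ideal_gas_volume(n=1.0, T=273.15, P=101325.0)
```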
Technical term extraction aims to automatically identify important terms in scientific papers to help analyze the meaning of texts. It uses a machine learning model called CRF that leverages existing scientific text corpora and bilingual lexicons to recognize terms. The identified terms are then applied in natural language processing tasks involving scientific papers.
The document discusses semantic enrichment of mathematical expressions by associating semantic tags using MathML to describe structure and content, and applying statistical machine translation to automatically extract translation rules and introduce segmentation rules to segment expressions, combining both types of rules to strengthen the translation system and improve over prior rule-based systems.
This document discusses using eye tracking data to diagnose cognitive attributes and readability levels. It examines how factors like technicality, lexical perplexity, syntactic complexity, semantic consistency, background knowledge, native language, emotional state and working memory can influence eye movements and aid in recognizing personal attributes. The diagnosis also considers how these various cognitive and contextual elements impact readability.
This document discusses composing word meanings from sub-word components using deep learning. It notes that while vectors can be used to construct word spaces, all new words share the same representation and appear identical. However, humans can generalize the meaning of new words like "minced-tuna" by understanding the individual meanings of "mince" and "tuna". The document suggests using deep learning to compose a word's meaning from its sub-word parts to better represent new words.
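The compositional idea can be sketched concretely: build a vector for the unseen compound from the vectors of its parts. The document proposes a learned (deep) composition function; the element-wise averaging and the 4-d toy embeddings below are placeholder assumptions, not the proposed model.

```python
# Toy sketch: compose a vector for an unseen compound ("minced-tuna")
# from its sub-word parts. Real work would learn this composition with
# a neural network; averaging is a stand-in, and the vectors are made up.

embeddings = {
    "mince": [0.8, 0.1, 0.0, 0.3],
    "tuna":  [0.0, 0.9, 0.4, 0.1],
}

def compose(parts):
    """Element-wise average of the part vectors."""
    vecs = [embeddings[p] for p in parts]
    return [sum(dims) / len(vecs) for dims in zip(*vecs)]

minced_tuna = compose(["mince", "tuna"])  # ~[0.4, 0.5, 0.2, 0.2]
```

The point of the sketch is that the compound gets a distinct, part-derived vector rather than sharing one generic "unknown word" representation.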
This document discusses associating gaze information with human reading strategies. It describes using natural language processing technologies and reading behavior clues like word length and frequency to predict reading strategies, such as fixation and skipping, with 95% similarity to observed reader data. The goal is to better understand general reading strategies regardless of individual differences. It also discusses using a conditional random field model and gaze features to optimize comma placement in text for improved readability.
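A toy version of the clue-based prediction: decide skip vs. fixate from word length and frequency, the two clues the summary names. The scoring rule and thresholds here are invented for illustration; the model the document describes is learned from reader data, not hand-tuned like this.

```python
# Toy sketch: predict a reading strategy (skip vs. fixate) for a word
# from length and frequency clues. The weights and threshold are made up.
import math

def predict_strategy(word, freq_per_million):
    # Short, very frequent words (e.g. function words) tend to be skipped;
    # long, rare words tend to attract fixations.
    score = len(word) - 0.8 * math.log1p(freq_per_million)
    return "skip" if score < 2.0 else "fixate"

predict_strategy("the", 50000)     # short and frequent -> "skip"
predict_strategy("perplexity", 2)  # long and rare -> "fixate"
```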
1) The paper proposes a co-ranking framework to adapt graph-based ranking to tweet recommendation by simultaneously ranking tweets and their authors.
2) The co-ranking algorithm considers popularity, personalization based on user interests, and diversity to avoid closely connected nodes having only high scores.
3) An evaluation on a large Twitter dataset from 2011 shows the co-ranking approach improves tweet recommendation over baselines by 18.3% in DCG and 7.8% in MAP.
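The coupled-ranking idea in point 1 can be sketched as two score vectors that reinforce each other: tweet scores propagate over a tweet graph, tweets inherit part of their author's score, and author scores aggregate their tweets' scores. The update rule, damping factor, and toy graph below are illustrative assumptions, not the paper's exact algorithm (which also handles personalization and diversity).

```python
# Hedged sketch of a co-ranking iteration over tweets and authors.
# The coupling rule and parameters are illustrative, not the paper's.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def co_rank(tweet_links, authors_of, n_authors, steps=50, lam=0.5):
    """tweet_links[i]: tweets that tweet i links to (e.g. retweet graph);
    authors_of[i]: author index of tweet i."""
    n = len(authors_of)
    t = [1.0 / n] * n
    a = [1.0 / n_authors] * n_authors
    for _ in range(steps):
        # propagate tweet scores over the tweet graph
        t_new = [0.0] * n
        for i, outs in enumerate(tweet_links):
            for j in outs:
                t_new[j] += t[i] / len(outs)
        # couple: each tweet also inherits part of its author's score
        t = normalize([lam * t_new[i] + (1 - lam) * a[authors_of[i]]
                       for i in range(n)])
        # authors aggregate their tweets' scores
        a_new = [0.0] * n_authors
        for i in range(n):
            a_new[authors_of[i]] += t[i]
        a = normalize(a_new)
    return t, a

# 3 tweets, 2 authors: tweet 0 links to 1, tweet 1 links to 2
t, a = co_rank([[1], [2], []], [0, 0, 1], n_authors=2)
```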