What’s an e1000?
The e1000 is the Intel 82545EM Gigabit Ethernet Controller. VMware offers an emulated version of this controller, and most operating systems ship with an 82545EM driver. The 82545EM driver sucks! That’s why Intel replaced it with the e1000e, aka the 82574L.
What’s an e1000e?
The e1000e is the Intel 82574L Gigabit Ethernet Controller. In vSphere 5 (HW8), VMware offers an emulated version of the e1000e. Windows 7 and Windows 2008 ship with drivers for the 82574L. The 82574L is cool, but is it faster than a VMXNET3?
What’s VMXNET3?
The VMXNET3 adapter is the next generation of paravirtualized NIC, designed for performance. The VMXNET3 network adapter is a 10Gb virtual NIC. Drivers are shipped with VMware Tools, and most operating systems are supported. VMXNET3 is much faster than the e1000 or e1000e, has less CPU overhead, and is more stable than either.
SOSCON 2019.10.17
What are the methods for packet processing on Linux, and how fast is each of them? In this presentation, we will learn how to handle packets on Linux (user space, socket filter, netfilter, tc) and compare their performance, with an analysis of where in the network stack each kind of processing happens (its hook point). We will also discuss packet processing using XDP, an in-kernel fast path recently added to the Linux kernel. eXpress Data Path (XDP) is a high-performance, programmable network data path within the Linux kernel. XDP sits at the lowest software-accessible level of the network stack, the point at which the driver receives the packet. By using the eBPF infrastructure at this hook point, the network stack can be extended without modifying the kernel.
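To make the XDP hook point concrete, here is a minimal sketch (an illustration, not code from the talk) of an eBPF program that runs where the driver receives the packet, counts packets in a map, and passes them on to the regular stack. The map layout and section name are illustrative choices; such a program would typically be compiled with clang -target bpf and attached to an interface with a libbpf-based loader or ip link.
/* Minimal XDP sketch: count packets at the driver hook point and pass them on. */
#include <linux/bpf.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int xdp_count(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *count = bpf_map_lookup_elem(&pkt_count, &key);
    if (count)
        __sync_fetch_and_add(count, 1);   /* per-packet work done in the kernel */
    return XDP_PASS;                      /* hand the packet to the normal stack */
}

char LICENSE[] SEC("license") = "GPL";
Returning XDP_DROP here instead of XDP_PASS would discard the packet before an skb is even allocated, which is where much of XDP's speed advantage over higher hook points such as netfilter or tc comes from.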
Daniel T. Lee (Hoyeon Lee)
@danieltimlee
Daniel T. Lee currently works as a software engineer at Kosslab and contributes to the Linux kernel BPF project. He is interested in cloud, Linux networking, and tracing technologies, and likes to analyze the kernel's internals using BPF.
Pushing Packets - How Do the ML2 Mechanism Drivers Stack Up? (James Denton)
Architecting a private cloud to meet the use cases of its users can be a daunting task. How do you determine which of the many L2/L3 Neutron plugins and drivers to implement? Does network performance outweigh reliability? Are overlay networks just as performant as VLAN networks? The answers to these questions will drive the appropriate technology choice.
In this presentation, we will look at many of the common drivers built around the ML2 framework, including LinuxBridge, OVS, OVS+DPDK, SR-IOV, and more, and will provide performance data to help drive decisions around selecting a technology that's right for the situation. We will discuss our experience with some of these technologies, and the pros and cons of one technology over another in a production environment.
PowerPoint presentation on Linux scheduling and input/output operations. Source of information: Operating System Concepts, 8th Edition, by Abraham Silberschatz, Peter B. Galvin, and Greg Gagne.
[Container Plumbing Days 2023] Why was nerdctl made? (Akihiro Suda)
nerdctl (contaiNERD CTL) was made to facilitate development of new technologies in the containerd platform.
Such technologies include:
- Lazy-pulling with Stargz/Nydus/OverlayBD
- P2P image distribution with IPFS
- Image encryption with OCIcrypt
- Image signing with Cosign
- “Real” read-only mounts with mount_setattr
- Slirp-less rootless containers with bypass4netns
- Interactive debugging of Dockerfiles, with buildg
nerdctl is also useful for debugging Kubernetes nodes that are running containerd.
Through this session, the audience will learn about these functionalities of nerdctl, relevant projects, and the roadmap for the future.
https://containerplumbing.org/sessions/2023/why_was_nerdctl_
Docker Networking with New Ipvlan and Macvlan Drivers (Brent Salisbury)
Docker Networking presentation at ONS2016.
Docker Macvlan and Ipvlan Networking Drivers Experimental Readme:
github.com/docker/docker/blob/master/experimental/vlan-networks.md
The kernel requirement for Ipvlan mode is v4.2+; for Macvlan mode it is v3.19.
If testing with VirtualBox, use NAT-mode interfaces unless you have multiple MAC addresses working in your setup, and use the 172.x.x.x subnet and gateway used by the VirtualBox NAT network. VMware Fusion works out of the box.
Here is a screenshot of a VirtualBox NAT interface:
https://www.dropbox.com/s/w1rf61n18y7q4f1/Screenshot%202016-03-20%2001.55.13.png?dl=0
The TC Flower Classifier allows control of packets based on flows determined by matching of well-known packet fields and metadata. This is inspired by similar flow classification described by OpenFlow and implemented by Open vSwitch. Offload of the TC Flower classifier and related modules provides a powerful mechanism to both increase throughput and reduce CPU utilisation for users of such flow-based systems. This presentation will give an overview of the evolution of offload of the TC Flower classifier: where it came from, the current status and possible future directions.
netfilter is a framework provided by the Linux kernel that allows various networking-related operations to be implemented in the form of customized handlers.
iptables is a user-space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall (implemented as different netfilter modules) and the chains and rules it stores.
Many systems, from home routers to sophisticated cloud network stacks, use iptables/netfilter, Linux's native packet filtering/mangling framework since Linux 2.4.
In this session, we will talk about the netfilter framework and its facilities, explain how basic filtering and mangling use-cases are implemented using iptables, and introduce some less common but powerful extensions of iptables.
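As a rough illustration of those "customized handlers" (a sketch of the framework's registration API, not material from the session), a kernel module can register a function at one of the netfilter hook points; iptables chains are ultimately backed by handlers registered in the same way. The port number and hook choice are arbitrary for the example.
/* Sketch of a netfilter handler: drop incoming TCP packets to port 8080. */
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <net/net_namespace.h>

static unsigned int drop_8080(void *priv, struct sk_buff *skb,
                              const struct nf_hook_state *state)
{
    struct iphdr *iph = ip_hdr(skb);
    if (iph && iph->protocol == IPPROTO_TCP) {
        struct tcphdr *tcph = tcp_hdr(skb);
        if (tcph && ntohs(tcph->dest) == 8080)
            return NF_DROP;          /* matched: discard the packet */
    }
    return NF_ACCEPT;                /* everything else continues up the stack */
}

static struct nf_hook_ops ops = {
    .hook     = drop_8080,
    .pf       = NFPROTO_IPV4,
    .hooknum  = NF_INET_LOCAL_IN,    /* the hook the iptables INPUT chain uses */
    .priority = NF_IP_PRI_FIRST,
};

static int __init nf_example_init(void) { return nf_register_net_hook(&init_net, &ops); }
static void __exit nf_example_exit(void) { nf_unregister_net_hook(&init_net, &ops); }
module_init(nf_example_init);
module_exit(nf_example_exit);
MODULE_LICENSE("GPL");
The equivalent rule expressed through the iptables frontend would simply be iptables -A INPUT -p tcp --dport 8080 -j DROP.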
Shmulik Ladkani, Chief Architect at Nsof Networks.
Long-time network veteran and kernel geek.
Shmulik started his career at Jungo (acquired by NDS/Cisco) implementing residential gateway software, focusing on embedded Linux, Linux kernel, networking and hardware/software integration.
Some billions of forwarded packets later, Shmulik left his position as Jungo's lead architect and joined Ravello Systems (acquired by Oracle) as tech lead, developing a virtual data center as a cloud-based service, focusing around virtualization systems, network virtualization and SDN.
Recently he co-founded Nsof Networks, where he's been busy architecting network infrastructure as a cloud-based service, gazing at internet routes in astonishment, and playing the chkuku.
Kubernetes networking: Introduction to overlay networks, communication models... (Murat Mukhtarov)
This talk was given during the Kubernetes Meetup in Melbourne on 26 April 2016. In this presentation we provide a quick overview of the overlay networking concept, an introduction to Linux namespaces, and a comparison between the Kubernetes and Docker networking models. An implementation example based on a Flannel network is presented as well.
LinuxCon 2015 Linux Kernel Networking Walkthrough (Thomas Graf)
This presentation features a walk through the Linux kernel networking stack for users and developers. It covers insights into both existing essential networking features and recent developments, and shows how to use them properly. Our starting point is the network card driver as it feeds a packet into the stack. We will follow the packet as it traverses various subsystems such as packet filtering, routing, protocol stacks, and the socket layer. We will pause here and there to look into concepts such as network namespaces, segmentation offloading, TCP small queues, and low-latency polling, and will discuss how to configure them.
Video: https://www.youtube.com/watch?v=JRFNIKUROPE. Talk for linux.conf.au 2017 (LCA2017) by Brendan Gregg, about Linux enhanced BPF (eBPF). Abstract:
A world of new capabilities is emerging for the Linux 4.x series, thanks to enhancements that have been included in Linux for Berkeley Packet Filter (BPF): an in-kernel virtual machine that can execute user space-defined programs. It is finding uses in security auditing and enforcement, enhancing networking (including eXpress Data Path), and performance observability and troubleshooting. Many new open source tools that use BPF have been written for performance analysis in the past 12 months. Tracing superpowers have finally arrived for Linux!
For its use with tracing, BPF provides the programmable capabilities to the existing tracing frameworks: kprobes, uprobes, and tracepoints. In particular, BPF allows timestamps to be recorded and compared from custom events, allowing latency to be studied in many new places: kernel and application internals. It also allows data to be efficiently summarized in-kernel, including as histograms. This has allowed dozens of new observability tools to be developed so far, including measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more.
This talk will summarize BPF capabilities and use cases so far, and then focus on its use to enhance Linux tracing, especially with the open source bcc collection. bcc includes BPF versions of old classics, and many new tools, including execsnoop, opensnoop, funccount, ext4slower, and more (many of which I developed). Perhaps you'd like to develop new tools, or use the existing tools to find performance wins large and small, especially when instrumenting areas that previously had zero visibility. I'll also summarize how we intend to use these new capabilities to enhance systems analysis at Netflix.
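As a rough sketch of the timestamp-and-compare pattern described above (an illustration in libbpf-style C, not one of the bcc tools), the following program times vfs_read() calls by pairing a kprobe with a kretprobe; the probed function and the bpf_printk output are arbitrary choices, and a real tool would aggregate the deltas into an in-kernel histogram map instead.
/* Sketch: measure per-call latency of vfs_read() by recording a timestamp
 * at function entry and computing the delta at return. */
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 10240);
    __type(key, __u32);    /* thread id */
    __type(value, __u64);  /* entry timestamp in ns */
} start SEC(".maps");

SEC("kprobe/vfs_read")
int trace_entry(struct pt_regs *ctx)
{
    __u32 tid = (__u32)bpf_get_current_pid_tgid();
    __u64 ts = bpf_ktime_get_ns();
    bpf_map_update_elem(&start, &tid, &ts, BPF_ANY);
    return 0;
}

SEC("kretprobe/vfs_read")
int trace_return(struct pt_regs *ctx)
{
    __u32 tid = (__u32)bpf_get_current_pid_tgid();
    __u64 *tsp = bpf_map_lookup_elem(&start, &tid);
    if (tsp) {
        __u64 delta_ns = bpf_ktime_get_ns() - *tsp;
        bpf_printk("vfs_read latency: %llu ns", delta_ns);  /* demo output only */
        bpf_map_delete_elem(&start, &tid);
    }
    return 0;
}

char LICENSE[] SEC("license") = "GPL";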
Introduction to BeRTOS, an open source real-time embedded operating system. BeRTOS is also free for commercial projects and closed source applications.
http://www.bertos.org/download/
Unifying Network Filtering Rules for the Linux Kernel with eBPF (Netronome)
At the core of fast network packet processing lies the ability to filter packets, or in other words, to apply a set of rules on packets, usually consisting of a pattern to match (L2 to L4 source and destination addresses and ports, protocols, etc.) and corresponding actions (redirect to a given queue, or drop the packet, etc.). Over the years, several filtering frameworks have been added to Linux. While at the lower level, ethtool can be used to configure N-tuple rules on the receive side for the hardware, the upper layers of the stack got equipped with rules for firewalling (Netfilter), traffic shaping (TC), or packet switching (Open vSwitch for example).
In this presentation, Quentin Monnet reviews the needs behind those filtering frameworks and the particularities of each one, then focuses on the changes brought by eBPF and XDP in this landscape: because BPF programs allow very flexible processing and can be attached very low in the stack (at the driver level, or even run on the NIC itself), they offer filtering capabilities with no precedent in terms of performance and versatility in the kernel. Lastly, the third part explores potential leads for building bridges between the different rule formats and making it easier for users to build their filtering eBPF programs.
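To make the "pattern plus action" idea concrete, here is a minimal sketch (an illustration, not Netronome's code) of such a rule written as a TC eBPF classifier in direct-action mode: it matches IPv4/UDP packets with destination port 53 and drops them, accepting everything else. The section name and port are arbitrary choices for the example.
/* Sketch of a flow-style rule as a TC eBPF classifier:
 * match "IPv4 + UDP + destination port 53" and drop, accept otherwise. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("classifier")
int drop_dns(struct __sk_buff *skb)
{
    void *data     = (void *)(long)skb->data;
    void *data_end = (void *)(long)skb->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return TC_ACT_OK;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end || ip->protocol != IPPROTO_UDP)
        return TC_ACT_OK;

    struct udphdr *udp = (void *)ip + ip->ihl * 4;
    if ((void *)(udp + 1) > data_end)
        return TC_ACT_OK;

    if (udp->dest == bpf_htons(53))
        return TC_ACT_SHOT;   /* matched the rule: drop */
    return TC_ACT_OK;         /* no match: pass */
}

char LICENSE[] SEC("license") = "GPL";
Attached with tc filter add dev <iface> ingress bpf da obj filter.o, the same match-and-action logic that N-tuple, Netfilter, or TC flower rules express declaratively is here just a small program, and on hardware with eBPF offload support it can run on the NIC itself.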
PyCon APAC 2013 presentation
http://apac-2013.pycon.jp/ja/program/sessions.html#session-14-1110-rooma0762-en2-ja
videos are available at
http://www.youtube.com/watch?v=Ow-aXpMO8-o
Deeper Dive in Docker Overlay Networks (Docker, Inc.)
The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink, and a distributed key-value store. This talk will present each of these mechanisms one by one along with their userland tools and show hands-on how they interact when setting up an overlay to connect containers. The talk will continue with a demo showing how to build your own simple overlay using these technologies. Finally, it will show how we can dynamically distribute IP and MAC information to every host in the overlay.
12. Localization workflow (Translate project)
๏ Refer to the translations above and below to understand the context.
๏ If the source string ends with a period (.), translate it as a full sentence; if it has no period, translate it as a noun phrase.
๏ Leave HTML tags and reserved tokens such as %s as they are.
You can see translations suggested by other users.
If you mark a suggestion as ambiguous, other users can review it once more.
Suggest a translation by referring to the English text (or another language chosen in your account settings).
You can click Previous and Next to suggest other translations.
13. After translation
๏ Suggested translations are reviewed and approved by a reviewer before being reflected on the site.
๏ Current reviewers:
- Channy Yun
- Kenu
- Hyeonseok Shin
๏ With your active participation, we look forward to new reviewers joining!
16. Mozilla customer support community: Mozilla Support (SUMO)
๏ Easy to join, but few people actually participate
- One of the most active communities internationally
๏ SUMO project activities
- Translating the Mozilla Support site
- Translating help articles for each Firefox product
- Firefox, Mobile, Thunderbird
- Running per-product Q&A boards for the Mozilla community
๏ How to participate
- You can start contributing by translating help articles
22. Choosing a document to translate (Choose Document)
๏ Pick a document that has not yet been translated into Korean or that needs updating.
https://support.mozilla.org/ko/localization/most-visited-translations
23. Choosing a document to translate (Choose Document)
๏ For the document you pick, first translate only the title and part of the content and request a review, so that your work does not overlap with someone else's.
๏ For each document, click "Revision history" to see the document's current status. If it is in an unreviewed state, someone is probably already working on it, so choose another document to work on.
24. Working on a document that needs translation (Translate document)
๏ Select a document marked "Needs translation".
๏ Translate the document title, preferably as a clear noun phrase.
- Editing the Korean title automatically generates a shortened title, but use the English one as-is!
26. Working on a document that needs translation (Translate document)
๏ Start the Korean translation from the English version. Documents use WIKI markup, so some constructs must not be modified.
27. Working on a document that needs translation (WIKI Markup)
Curly braces { } are conditionals; leave them as they are.
Double square brackets [[ ]] are links to other documents; do not translate the English document name, leave it as-is.
If the document name inside double square brackets [[ ]] needs adjusting, enter it in the form [[English document name|Korean translation]].
WIKI markup is a style attribute, so leave it as-is.
Attributes such as button and menu inside curly braces { } are styles. You may translate the text that follows them, but leave button, menu, and the like untranslated.
28. Working on a document that needs translation (WIKI Markup)
๏ The markup introduction page teaches the markup used in help documents that must not be translated.
- Heading 1 ( =Heading 1= )
- Table of contents ( __TOC__ )
- Numbered list (# item 1)
- Bulleted list (* item 1)
- Keyboard shortcut {key Ctrl+T}
- ….
https://support.mozilla.com/ko/kb/markup-chart
29. Working on a document that needs translation (Request review)
๏ You can preview or save the result of your translation. A review request can serve as a temporary save, so get into the habit of saving often.
30. Working on a document that needs translation (Request review)
๏ When you click Request Review, briefly describe your changes in the "send changes" window and click Submit; the document then moves into a temporarily saved, under-review state. The review request is forwarded to Channy, the Korean document reviewer, and the document is published once reviewed. Until then, the English document may still be shown.
32. Working on a document that needs updating (Translate document)
๏ For a document that needs updating, the work is essentially to look at the changes in the English document and apply those changes to the Korean document.
๏ Clicking the Toggle button shows the list of changes.
33. Referring to Japanese documents (Translate Japanese)
๏ The Mozilla Japan community is more active than Korea's, so many documents are already translated and can be carried over into Korean fairly easily.
- A Japanese document can be found simply by changing ko to ja in the Korean document URL.
- https://support.mozilla.org/ko/kb/.../translate
- https://support.mozilla.org/ja/kb/.../translate
36. Referring to Japanese documents (Translate Japanese)
๏ Do a second-pass translation with the Mozilla terminology converter.
๏ Take the text converted according to the terminology list, smooth out particles and expressions, and complete the Korean translation.
http://channy.creation.net/project/firefox/ja2ko/
37. After translation
๏ Review process
- Documents you translate are published after review, so they may not appear immediately after you finish the translation.
- To join the review group, you need a certain level of contribution and then approval from the global community. Keep at it and you can become a member too.
๏ Korean community
- If you have questions, join the Korean Mozilla community mailing list and share your thoughts!
- https://groups.google.com/forum/?fromgroups#!forum/mozillakorea