2. • Hadoop Core, our flagship sub-
project, provides a distributed filesystem
(HDFS) and support for the MapReduce
distributed computing metaphor.
• Pig is a high-level data-flow language and
execution framework for parallel computation.
It is built on top of Hadoop Core.
3. ZooKeeper
• ZooKeeper is a highly available and reliable
coordination system. Distributed applications
use ZooKeeper to store and mediate updates
for critical shared state.
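A minimal sketch (not from the deck) of what that looks like with the ZooKeeper Java client; the connect string, session timeout, and znode path are placeholders:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkSharedState {
    public static void main(String[] args) throws Exception {
        // no-op watcher; a real client would react to session events
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, new Watcher() {
            public void process(WatchedEvent event) { }
        });
        String path = "/shared-state"; // placeholder znode
        if (zk.exists(path, false) == null) {
            zk.create(path, "v1".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.PERSISTENT);
        } else {
            zk.setData(path, "v2".getBytes(), -1); // -1 matches any version
        }
        System.out.println(new String(zk.getData(path, false, null)));
        zk.close();
    }
}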
4. JobTracker
• JobTracker: The JobTracker provides command
and control for job management. It supplies
the primary user interface to a MapReduce
cluster. It also handles the distribution and
management of tasks. There is one instance of
this server running on a cluster. The machine
running the JobTracker server is the
MapReduce master.
5. TaskTracker
• TaskTracker: The TaskTracker provides
execution services for the submitted jobs.
Each TaskTracker manages the execution of
tasks on an individual compute node in the
MapReduce cluster. The JobTracker manages
all of the TaskTracker processes. There is one
instance of this server per compute node.
6. NameNode
• NameNode: The NameNode provides metadata
storage for the shared file system. The
NameNode supplies the primary user interface to
the HDFS. It also manages all of the metadata for
the HDFS. There is one instance of this server
running on a cluster. The metadata includes such
critical information as the file directory structure
and which DataNodes have copies of the data
blocks that contain each file’s data. The machine
running the NameNode server process is the
HDFS master.
7. Secondary NameNode
• Secondary NameNode: The secondary
NameNode provides both file system metadata
backup and metadata compaction. It supplies
near real-time backup of the metadata for the
NameNode. There is at least one instance of this
server running on a cluster, ideally on a separate
physical machine from the one running the
NameNode. The secondary NameNode also
merges the metadata change history, the edit
log, into the NameNode’s file system image.
8. Design of HDFS
• Design of HDFS
– Very large files
– Streaming data access
– Commodity hardware
• Not a good fit for:
– Low-latency data access
– Lots of small files
– Multiple writers, arbitrary file modifications
14. DistCp (distributed copy)
• hadoop distcp -update hdfs://namenode1/foo hdfs://namenode2/bar
• hadoop distcp -update ……
– only copies files that have changed
• hadoop distcp -overwrite ……
– overwrites files at the destination
• hadoop distcp -m 100 ……
– the copy is split into the given number of map tasks (here 100)
15. Hadoop file archives
• HAR files
• hadoop archive -archiveName file.har /myfiles /outpath
• hadoop fs -ls /outpath/file.har
• hadoop fs -lsr har:///outpath/file.har
16. File operations
• hadoop fs -rm hdfs://192.168.126.133:9000/xxx
• Other fs subcommands:
cat, chgrp, chmod, chown, copyFromLocal, copyToLocal, count, cp, du, dus, expunge, get, getmerge, ls, lsr, mkdir, moveFromLocal, moveToLocal, mv, put, rm, rmr, setrep, stat, tail, test, text, touchz
19. Distributed deployment: core-site.xml
(identical on master & slaves)
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/tony/tmp/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.0.10:9000</value>
  </property>
</configuration>
20. Distributed deployment: hdfs-site.xml
(master & slaves)
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/tony/tmp/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/tony/tmp/data</value>
  </property>
</configuration>
• Also make sure these directories exist on the machine.
22. Run
• hadoop namenode -format
– before each format, run stop-all.sh first and clear every directory under tmp
• start-all.sh, or start-dfs.sh and start-mapred.sh
• To check cluster status:
– http://192.168.0.20:50070/dfshealth.jsp
– or hadoop dfsadmin -report
25. could only be replicated
• java.io.IOException: could only be replicated
to 0 nodes, instead of 1.
• Fix:
– the XML configuration is wrong; make sure the addresses in every slave's mapred-site.xml and core-site.xml match the master's
26. Incompatible namespaceIDs
• java.io.IOException: Incompatible
namespaceIDs in /home/hadoop/data:
namenode namespaceID = 1214734841;
datanode namespaceID = 1600742075
• Cause:
– tmp was not cleared before formatting, so the namespaceIDs no longer match
• Fix:
– edit /home/hadoop/name/current/VERSION on the NameNode so the namespaceIDs agree
28. Name node is in safe mode
• hadoop dfsadmin -safemode leave
• Safe mode
The NameNode enters safe mode when it starts. If the fraction of blocks missing from the DataNodes exceeds (1 - dfs.safemode.threshold.pct), the system stays in safe mode, i.e. read-only.
dfs.safemode.threshold.pct (default 0.999f) means HDFS can leave safe mode only once the DataNodes have reported at least 0.999 of the block count recorded in the metadata; until then it stays read-only. Setting it to 1 keeps HDFS in safe mode permanently.
The following line is taken from the NameNode startup log (the reported-block ratio of 1.0 reached the threshold of 0.999):
The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 18 seconds.
There are two ways to leave safe mode:
1. Lower dfs.safemode.threshold.pct (default 0.999) to a smaller value.
2. Force it with hadoop dfsadmin -safemode leave.
• Safe mode is controlled with dfsadmin -safemode value, where value is one of:
enter - enter safe mode
leave - force the NameNode out of safe mode
get - report whether safe mode is on
wait - block until safe mode ends
A Java sketch of the same control follows.
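A hedged sketch (not from the deck) against the 0.20-era API; DistributedFileSystem.setSafeMode is the call behind dfsadmin -safemode:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.FSConstants;

public class SafeModeControl {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
        // SAFEMODE_GET only reports the current state
        boolean on = dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET);
        System.out.println("safe mode is " + (on ? "ON" : "OFF"));
        if (on) {
            // equivalent to: hadoop dfsadmin -safemode leave
            dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_LEAVE);
        }
    }
}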
29. error in shuffle in fetcher
• org.apache.hadoop.mapreduce.task.reduce.Sh
uffle$ShuffleError: error in shuffle in fetcher
• Fix:
– the problem is in the hosts file; add hostname-to-IP mappings for all the other nodes to /etc/hosts on every node
36. Running MapReduce
• hadoop jar a.jar com.Map1
hdfs://192.168.126.133:9000/hadoopconf/
hdfs://192.168.126.133:9000/output2/
• Status:
• http://localhost:50030/
37. Read From Hadoop URL
// execute: hadoop ReadFromHDFS
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.URL;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

public class ReadFromHDFS {
  static {
    // teaches java.net.URL the hdfs:// scheme; may only be set once per JVM
    URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
  }
  public static void main(String[] args) {
    try {
      URL url = new URL("hdfs://192.168.126.133:9000/t1/a1.txt");
      IOUtils.copyBytes(url.openStream(), System.out, 4096, false);
    } catch (FileNotFoundException e) {
      e.printStackTrace();
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}
38. Read By FileSystem API
// execute: hadoop ReadByFileSystemAPI
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ReadByFileSystemAPI {
  public static void main(String[] args) throws Exception {
    String uri = "hdfs://192.168.126.133:9000/t1/a2.txt";
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create(uri), conf);
    FSDataInputStream in = null;
    try {
      in = fs.open(new Path(uri));
      IOUtils.copyBytes(in, System.out, 4096, false);
    } finally {
      IOUtils.closeStream(in);
    }
  }
}
39. FileSystemAPI
// assumes fs was obtained as on the previous slide
Path path = new Path(URI.create("hdfs://192.168.126.133:9000/t1/tt/"));
if (fs.exists(path)) {
  fs.delete(path, true); // true = recursive
  System.out.println("deleted-----------");
} else {
  fs.mkdirs(path);
  System.out.println("created=====");
}
/**
 * List files
 */
FileStatus[] fileStatuses = fs.listStatus(new Path(URI.create("hdfs://192.168.126.133:9000/")));
for (FileStatus fileStatus : fileStatuses) {
  System.out.println(fileStatus.getPath().toUri().toString() + " dir:" + fileStatus.isDirectory());
}
PathFilter pathFilter = new PathFilter() {
  @Override
  public boolean accept(Path path) {
    return true; // accepts everything; replace with a real predicate
  }
};
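The pathFilter above is declared but never applied; one way to use it (a sketch reusing the slide's fs and address) is the PathFilter overload of listStatus:

// apply the filter while listing the root directory
FileStatus[] filtered = fs.listStatus(
    new Path(URI.create("hdfs://192.168.126.133:9000/")), pathFilter);
for (FileStatus status : filtered) {
  System.out.println(status.getPath());
}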
40. File write visibility
• After a file is created, it is visible in the filesystem's namespace:
1. Path p = new Path("p");
2. fs.create(p);
3. assertThat(fs.exists(p), is(true));
• However, content written to the file is not guaranteed to be visible, even after the stream has been flushed, so the file length shows as 0:
1. Path p = new Path("p");
2. OutputStream out = fs.create(p);
3. out.write("content".getBytes("UTF-8"));
4. out.flush();
5. assertThat(fs.getFileStatus(p).getLen(), is(0L));
• Once more than a block's worth of data has been written, the first block becomes visible to new readers, and likewise for subsequent blocks. It is always the block currently being written that other readers cannot see.
• out.sync() forces synchronization; close() calls sync() automatically
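Putting those fragments together, a self-contained write sketch (the file path is an assumption consistent with the earlier slides):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteToHDFS {
    public static void main(String[] args) throws Exception {
        String uri = "hdfs://192.168.126.133:9000/t1/out.txt"; // assumed path
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        FSDataOutputStream out = fs.create(new Path(uri));
        out.write("content".getBytes("UTF-8"));
        out.sync();  // force the written data to be visible to new readers
        out.close(); // close() syncs as well
    }
}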
41. Cluster copy & archiving
• hadoop distcp -update hdfs://n1/foo hdfs://n2/bar/foo
• Archiving
– hadoop archive -archiveName files.har /my/files /my
• Using an archive
– hadoop fs -lsr har:///my/files.har
– hadoop fs -lsr har://hdfs-localhost:8020/my/files.har/my/files/di
• Archive drawback: modifying, adding, or deleting files all require re-creating the archive
42. SequenceFile Reader&Writer
Configuration conf = new Configuration();
SequenceFile.Writer writer = null;
try {
  System.out.println("start....................");
  FileSystem fileSystem = FileSystem.newInstance(conf);
  IntWritable key = new IntWritable(1);
  Text value = new Text("");
  Path path = new Path("hdfs://192.168.126.133:9000/t1/seq");
  if (!fileSystem.exists(path)) {
    // createWriter creates the file itself; no separate create() call is needed
    writer = SequenceFile.createWriter(fileSystem, conf, path, key.getClass(), value.getClass());
    for (int i = 1; i < 10; i++) {
      writer.append(new IntWritable(i), new Text("value" + i));
    }
    writer.close();
  } else {
    SequenceFile.Reader reader = new SequenceFile.Reader(fileSystem, path, conf);
    System.out.println("now while segment");
    while (reader.next(key, value)) {
      System.out.println("key:" + key.get() + " value:" + value + " position:" + reader.getPosition());
    }
    reader.close();
  }
} catch (IOException e) {
  e.printStackTrace();
} finally {
  IOUtils.closeStream(writer);
}
43. SequenceFile
• 1 value1
• 2 value2
• 3 value3
• 4 value4
• 5 value5
• 6 value6
• 7 value7
• 8 value8
• 9 value9
• Each record consists of a key and a value
• Use hadoop fs -text hdfs://……… to display the file
44. MapFile
• Rebuild the index:
MapFile.fix(fileSystem, path, key.getClass(), value.getClass(), true, conf);
• MapFile.Writer writer = new MapFile.Writer(conf, fileSystem, path.toString(), key.getClass(), value.getClass());
• MapFile.Reader reader = new MapFile.Reader(fileSystem, path.toString(), conf);
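Joining those fragments into one runnable sketch (the path and key/value types are assumptions; a MapFile is a directory holding a data file and an index file):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;

public class MapFileDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fileSystem = FileSystem.get(conf);
        Path path = new Path("/t1/map"); // assumed output directory
        IntWritable key = new IntWritable();
        Text value = new Text();

        // write: MapFile keys must be appended in sorted order
        MapFile.Writer writer = new MapFile.Writer(conf, fileSystem,
                path.toString(), IntWritable.class, Text.class);
        for (int i = 1; i < 10; i++) {
            writer.append(new IntWritable(i), new Text("value" + i));
        }
        writer.close();

        // read everything back
        MapFile.Reader reader = new MapFile.Reader(fileSystem,
                path.toString(), conf);
        while (reader.next(key, value)) {
            System.out.println(key + "\t" + value);
        }
        reader.close();
    }
}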
45. Mapper Test Case
@Test
public void testMapper1() throws IOException {
  MyMapper myMapper = new MyMapper();
  Text text = new Text("xxxxxx<<HelloWorld>>xxxxxxxxxxxxxxxxxx");
  OutputCollector<Text, IntWritable> outputCollector = new OutputCollector<Text, IntWritable>() {
    public void collect(Text resultKey, IntWritable resultValue) throws IOException {
      System.out.println("resultKey:" + resultKey + " resultValue:" + resultValue);
      Assert.assertTrue("HelloWorld".equals(resultKey.toString()));
    }
  };
  myMapper.map(null, text, outputCollector, null);
}
public class MyMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
  @Override
  public void map(LongWritable longWritable, Text text, OutputCollector<Text, IntWritable> outputCollector, Reporter reporter) throws IOException {
    // extract the token between << and >> and emit it with its length
    Text result = new Text(text.toString().split("<<")[1].split(">>")[0]);
    outputCollector.collect(result, new IntWritable(result.getLength()));
  }
}
46. Reducer Test Case
@Test
public void testReducer1() throws IOException {
  MyReducer myReducer = new MyReducer();
  ArrayList<Text> arrayList = new ArrayList<Text>();
  arrayList.add(new Text("a1")); arrayList.add(new Text("a222")); arrayList.add(new Text("a33"));
  Iterator<Text> it = arrayList.iterator();
  OutputCollector<Text, Text> outputCollector = new OutputCollector<Text, Text>() {
    public void collect(Text resultKey, Text resultValue) throws IOException {
      System.out.println("resultKey:" + resultKey + " resultValue:" + resultValue);
      // the reducer keeps the longest value, so "a222" should win
      Assert.assertTrue(resultValue.toString().equals("a222"));
    }
  };
  myReducer.reduce(null, it, outputCollector, null);
}
public class MyReducer extends MapReduceBase implements Reducer<Text, Text, Text, Text> {
  public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
    // keep the longest value seen for this key
    Text t = new Text();
    while (values.hasNext()) {
      Text tmp = values.next();
      if (tmp.getLength() > t.getLength()) {
        t = tmp;
      }
    }
    output.collect(key, t);
  }
}
47. How a MapReduce job is submitted
• JobClient.submitJob()
• 1. Asks the JobTracker for a new job ID
• 2. Checks the job's output specification (the output directory must not already exist)
• 3. Computes the input splits; if they cannot be computed (e.g. the input directory does not exist), the error is returned to the MapReduce program
• 4. Copies the resources needed to run the job to a directory on the JobTracker's filesystem
• 5. Tells the JobTracker that the job is ready to run
(A minimal driver sketch follows.)
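A hedged old-API driver sketch (not from the deck) showing where submitJob() sits; JobClient.runJob() performs the five steps above and then waits for completion. The class and path names are placeholders, reusing MyMapper from slide 45:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MyDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MyDriver.class);
        conf.setJobName("demo");
        conf.setMapperClass(MyMapper.class); // the mapper from slide 45
        conf.setNumReduceTasks(0);           // map-only for simplicity
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);              // submit and block until done
    }
}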
48. Mapper input: using multiple InputFormats
// match an InputFormat and Mapper to each input path
• MultipleInputs.addInputPath(conf, new Path(args[0]), KeyValueTextInputFormat.class, KVTempMapper.class);
• MultipleInputs.addInputPath(conf, new Path("hdfs://192.168.126.133:9000/*.txt"), TextInputFormat.class, KVTempMapper.class);
50. Customizing the output file name
public class MyOutputFormat extends MultipleTextOutputFormat {
  protected String generateFileNameForKeyValue(Object key, Object value, String name) {
    return "abc.txt";
  }
}
• When configuring the job:
conf.setOutputFormat(MyOutputFormat.class);
• All output then goes into the file abc.txt in the output directory
51. Configuring multiple named outputs
• MultipleOutputs.addNamedOutput(conf, "outputA", TextOutputFormat.class, LongWritable.class, Text.class);
• MultipleOutputs.addNamedOutput(conf, "outputB", MyOutputFormat.class, LongWritable.class, Text.class);
• Always close the MultipleOutputs instance when you are done
• Override configure() to get hold of the JobConf
– the original code is in the slide notes; a usage sketch follows
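The notes themselves are not reproduced here; a typical old-API usage pattern (a sketch with placeholder types, not the notes code) looks like this:

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.MultipleOutputs;

public class MultiOutReducer extends MapReduceBase
        implements Reducer<LongWritable, Text, LongWritable, Text> {
    private MultipleOutputs mos;

    @Override
    public void configure(JobConf job) {
        mos = new MultipleOutputs(job); // configure() is where the JobConf arrives
    }

    public void reduce(LongWritable key, Iterator<Text> values,
            OutputCollector<LongWritable, Text> output, Reporter reporter)
            throws IOException {
        while (values.hasNext()) {
            // write to the named output registered above as "outputA"
            mos.getCollector("outputA", reporter).collect(key, values.next());
        }
    }

    @Override
    public void close() throws IOException {
        mos.close(); // mandatory, or the named outputs may lose data
    }
}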
52. Counters
• In a mapper or reducer:
– reporter.incrCounter(CounterType.Success, 1); // CounterType is a user-defined enum
– reporter.incrCounter("myGroup", "name", 2);
• The counter totals are printed when the job completes
• Reading a counter from the driver:
RunningJob runningJob = JobClient.runJob(conf);
Counters.Counter counter = runningJob.getCounters().findCounter("myGroup", "counterA");
if (counter != null) {
  System.out.println(counter.getCounter());
}
53. Sorting & joins
• conf.setOutputFormat(MapFileOutputFormat.class);
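Output stored as MapFiles supports random lookups, which is the building block for joining against sorted job output. A hedged sketch (the output directory and key type are assumptions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapFileOutputFormat;
import org.apache.hadoop.mapred.lib.HashPartitioner;

public class MapFileLookup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path outDir = new Path("/output2"); // assumed job output directory
        MapFile.Reader[] readers = MapFileOutputFormat.getReaders(fs, outDir, conf);
        IntWritable key = new IntWritable(5); // key to look up
        Text value = new Text();
        // the same partitioner the job used picks the right reader
        MapFileOutputFormat.getEntry(readers,
                new HashPartitioner<IntWritable, Text>(), key, value);
        System.out.println(key + "\t" + value);
    }
}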
55. Pig syntax
• raw = LOAD 'excite.log' -- load a file
– USING PigStorage('\t') -- field delimiter
– AS (user:int, time:int, query:int); -- field names and types
• register XXX.jar -- use a JAR
• dump raw
• describe raw -- print the schema
• explain raw
• store raw into 'aaa.txt' -- save
57. Pig syntax
• Filter
– ccc = filter aaa by name is null and age > 10
• Group
– bbb = group aaa by myColumn
• Foreach & Generate
– ddd = foreach bbb generate group, MAX(aaa.temp)
• Illustrate
– ILLUSTRATE aaa -- shows each step on sample data
58. Pig built-in functions
• split XXX into a1 if temp is not null, a2 if temp is null
• Built-in functions:
– AVG, CONCAT, COUNT, DIFF, MAX, SIZE, SUM, TOKENIZE
– IsEmpty
– PigStorage
59. Foreach
• data:
– a, 1, hello
– b, 2, hey
• execute:
– foreach XXX generate $2, $1+10, $0
• result:
– hello, 11, a
– hey, 12, b
60. Custom UDF: filter
• filter XXX by isGood(year)
• public class GoodPig extends FilterFunc {
•   public Boolean exec(Tuple tuple);
• }
• Usage:
• define isGood pig.GoodPig();
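A hedged completion of the skeleton above (the package matches the define; the validity rule itself is a placeholder):

package pig;

import java.io.IOException;
import org.apache.pig.FilterFunc;
import org.apache.pig.data.Tuple;

public class GoodPig extends FilterFunc {
    @Override
    public Boolean exec(Tuple tuple) throws IOException {
        if (tuple == null || tuple.size() == 0 || tuple.get(0) == null) {
            return false; // filter out rows with no year
        }
        int year = (Integer) tuple.get(0); // the field passed as isGood(year)
        return year > 1900;                // placeholder rule for "good"
    }
}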
61. Custom Pig function: changing types
• public class MyEvalFunc extends EvalFunc {
•   public List<FuncSpec> getArgToFuncMapping()
• }
• Usage:
• define myEvalFunc com.MyEvalFunc();
• foreach XXX generate myEvalFunc(aaa)
62. A custom LoadFunc and storage handling
• store XXX into 'out.txt' using PigStorage('==')
– output: Hello==1==a
• Custom LoadFunc:
– a1 = load 'xxx.txt' using com.MyLoadFunc() as (year:int, temp:int)
– code in the slide notes
63. Joins in Pig
• aaa:
– 1,hi
– 2,hello
– 3,nihao
• bbb:
– a,2
– b,3
– c,1
• xxx = join aaa by $0, bbb by $1
• result:
– 1 hi c 1
– 2 hello a 2
– 3 nihao b 3
64. Hive overview
• A data warehouse
• Translates SQL-like syntax into MapReduce jobs
• No indexes or transactions; latency measured in minutes
• Does not support SQL's HAVING clause
• Data types
– basic types: string, int, double, boolean, etc.
– complex types: Array, Map, Struct
65. Hive 数据仓库
• % export HIVE_HOME=/home/my/hive
• Run: bin/hive
• hive> SHOW TABLES;
• hive -f script.q
• hive -e 'SELECT * FROM dummy'
66. Creating Hive tables and loading data
• Create a table:
• CREATE TABLE records
– (year STRING, temperature INT, quality INT)
– ROW FORMAT DELIMITED
– FIELDS TERMINATED BY '\t';
• Load from a file:
– LOAD DATA LOCAL INPATH 'input/ncdc/micro-tab/sample.txt'
– OVERWRITE INTO TABLE records
67. Error while making MR scratch
directory
• In Hadoop's core-site.xml, change the value of fs.default.name to the hostname listed in the hosts file
• Then restart Hadoop and Hive
• If it reports "name node is in safe mode":
– hadoop dfsadmin -safemode leave
– or create the needed directories on HDFS and open up their permissions:
– % hadoop fs -mkdir /tmp
– % hadoop fs -chmod a+w /tmp
– % hadoop fs -mkdir /user/hive/warehouse
– % hadoop fs -chmod a+w /user/hive/warehouse
70. Complex types
• CREATE TABLE complex (
• col1 ARRAY<INT>,
• col2 MAP<STRING, INT>,
• col3 STRUCT<a:STRING, b:INT, c:DOUBLE>
• );
• Query:
• SELECT col1[0], col2['b'], col3.c FROM complex
71. Managed vs. external tables
• A managed table moves its data into Hive's warehouse directory:
– CREATE TABLE managed_table (dummy STRING);
– LOAD DATA INPATH '/user/tom/data.txt' INTO TABLE managed_table;
• External tables:
– CREATE EXTERNAL TABLE external_table (dummy STRING) LOCATION '/user/tom/external_table';
– LOAD DATA INPATH '/user/tom/data.txt' INTO TABLE external_table;
– Dropping an external table deletes only the metadata, never the data
72. Hive partitions
• Data is stored in per-partition directories
– /user/hive/warehouse/tab4/level=2/city=beijing/h2.txt (the level=2/city=beijing part is the partition)
• Create a table:
– CREATE TABLE logs (ts BIGINT, line STRING)
– PARTITIONED BY (dt STRING, country STRING);
• Usage:
– LOAD DATA LOCAL INPATH 'input/hive/partitions/file1'
– INTO TABLE logs
– PARTITION (dt='2001-01-01', country='GB');
73. Hive Buckets
• CREATE TABLE bucketed_users (id INT, name
STRING)
• CLUSTERED BY (id) INTO 4 BUCKETS;
• Splits the data into 4 buckets, which lets the work be divided across multiple MapReduce tasks
74. Delimiters
• CREATE TABLE ...
• ROW FORMAT DELIMITED
• FIELDS TERMINATED BY '\001'
• COLLECTION ITEMS TERMINATED BY '\002'
• MAP KEYS TERMINATED BY '\003'
• LINES TERMINATED BY '\n'
• STORED AS TEXTFILE;
75. Specifying a SerDe (serialization/deserialization)
• CREATE TABLE stations (usaf STRING, wban STRING, name STRING)
• ROW FORMAT SERDE
'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
• WITH SERDEPROPERTIES (
• "input.regex" = "(\d{6}) (\d{5}) (.{29}) .*"
• );
• hive> SELECT * FROM stations LIMIT 4;
• 010000 99999 BOGUS NORWAY
• 010003 99999 BOGUS NORWAY
• 010010 99999 JAN MAYEN
• 010013 99999 ROST
76. Table commands
• create table xxx as select name,age from tab2
• ALTER TABLE source RENAME TO target;
• ALTER TABLE target ADD COLUMNS (col3 STRING);
• create table XXX as select c1,c2 from Tab2
77. Custom UDFs
• select myFun(age) from tab3;
• public class MyFun extends UDF {
• }
• Register it after writing:
– create temporary function myFun as 'com.MyFun'
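A hedged completion of the MyFun skeleton (the package matches the registration; the doubling logic is a placeholder). A Hive UDF exposes one or more evaluate() methods:

package com;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.IntWritable;

public final class MyFun extends UDF {
    public IntWritable evaluate(IntWritable age) {
        if (age == null) {
            return null;
        }
        return new IntWritable(age.get() * 2); // placeholder logic
    }
}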
94. HBase
• start-hbase.sh
• hbase shell
• create 'tab1','col'
• list -- show tables
• put 'tab1','row1', 'col:name', 'XiaoMing'
• put 'tab1', 'row1', 'col:age', '10'
• put 'tab1', 'row2', 'col:name', 'DaMing'
• Drop a table:
– disable 'tab1'
– drop 'tab1'
95. HBase API Get
@Test
public void testGet() throws IOException {
  Configuration conf = HBaseConfiguration.create();
  // conf.set("hbase.master.port", "localhost:PORT");
  // conf.set("hbase.zookeeper.quorum", "IP");
  HTable table = new HTable(conf, "tab1");
  Get get = new Get(Bytes.toBytes("r1"));
  get.addColumn(Bytes.toBytes("col"), Bytes.toBytes("name"));
  Result result = table.get(get);
  byte[] value = result.value();
  System.out.println("v:" + Bytes.toString(value));
  byte[] val = result.getValue(Bytes.toBytes("col"), Bytes.toBytes("name"));
  System.out.println("Value: " + Bytes.toString(val));
}
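A companion sketch for the Get test: writing the row it reads, the Java equivalent of the shell put commands on slide 94 (the row key "r1" matches the test above):

@Test
public void testPut() throws IOException {
  Configuration conf = HBaseConfiguration.create();
  HTable table = new HTable(conf, "tab1");
  Put put = new Put(Bytes.toBytes("r1"));
  put.add(Bytes.toBytes("col"), Bytes.toBytes("name"), Bytes.toBytes("XiaoMing"));
  put.add(Bytes.toBytes("col"), Bytes.toBytes("age"), Bytes.toBytes("10"));
  table.put(put);
  table.close(); // flushes any buffered writes
}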