
An Example Analysis of Hive Command Operations

This article walks through "An Example Analysis of Hive Command Operations". The content is concise and clearly organized; hopefully it helps clear up any questions you have. Let's work through it together.


1. Prepare a text file and start Hadoop
[root@hadoop0 ~]# cat /opt/test.txt
JieJie
MengMeng
NingNing
JingJing
FengJie
[root@hadoop0 ~]# start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-namenode-hadoop0.out
localhost: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-hadoop0.out
localhost: starting secondarynamenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-secondarynamenode-hadoop0.out
starting jobtracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-jobtracker-hadoop0.out
localhost: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-hadoop0.out
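Before launching Hive, it can be worth confirming that all five Hadoop 1.x daemons actually came up. A quick check (not part of the original transcript) is jps:
[root@hadoop0 ~]# jps
The output should list NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker and the Jps process itself; if any daemon is missing, inspect the corresponding log file shown above.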
2. Enter the Hive command line
[root@hadoop0 ~]# hive
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-0.9.0.jar!/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_201509252001_1674268419.txt
3. Query the table created the day before
hive> select * from stu;
OK
JieJie 26       NULL
MM 24   NULL
Time taken: 17.05 seconds
4. Show the databases
hive> show databases;
OK
default
Time taken: 0.237 seconds
5. Create a database
hive> create database test;
OK
Time taken: 0.259 seconds
hive> show databases;       
OK
default
test
Time taken: 0.119 seconds
6. Use the database
hive> use test;
OK
Time taken: 0.03 seconds
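After use test, any tables created in the following steps belong to the test database. An illustrative way to confirm it is still empty before step 7:
hive> show tables;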
7. Create tables
TEXTFILE is the default storage format. The data is stored uncompressed, so both disk usage and parsing overhead are high. It can be combined with Gzip or Bzip2 compression (Hive detects the compression automatically and decompresses at query time), but data stored this way cannot be split by Hive, so it cannot be processed in parallel.
SEQUENCEFILE is a binary file format provided by the Hadoop API. It is easy to use, splittable, and compressible. SequenceFile supports three compression options: NONE, RECORD, and BLOCK. RECORD compression gives a low compression ratio, so BLOCK compression is generally recommended.
RCFILE combines row and columnar storage. It first partitions the data into row groups, which guarantees that a single record sits in one block, so reading a record never requires reading multiple blocks. Within each row group the data is stored column by column, which helps compression and fast column access.
hive>  create table test1(str STRING)  STORED AS TEXTFILE; 
OK
Time taken: 0.598 seconds
-- Load the data
hive> LOAD DATA LOCAL INPATH '/opt/test.txt' INTO TABLE test1; 
Copying data from file:/opt/test.txt
Copying file: file:/opt/test.txt
Loading data to table test.test1
OK
Time taken: 1.657 seconds
hive> select * from test1;
OK
JieJie
MengMeng
NingNing
JingJing
FengJie
Time taken: 0.388 seconds
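For reference (not shown in the original run), dropping the LOCAL keyword makes LOAD DATA take a file that already resides in HDFS; in that case Hive moves the file into the table's directory instead of copying it. The path below is purely illustrative:
hive> LOAD DATA INPATH '/user/root/test.txt' INTO TABLE test1;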
hive> select count(*) from test1;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapred.reduce.tasks=
Starting Job = job_201509252000_0001, Tracking URL = http://hadoop0:50030/jobdetails.jsp?jobid=job_201509252000_0001
Kill Command = /opt/hadoop/libexec/../bin/hadoop job  -Dmapred.job.tracker=hadoop0:9001 -kill job_201509252000_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2015-09-25 20:09:55,796 Stage-1 map = 0%,  reduce = 0%
2015-09-25 20:10:19,806 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.67 sec
2015-09-25 20:10:53,218 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.95 sec
2015-09-25 20:10:54,223 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.95 sec
MapReduce Total cumulative CPU time: 6 seconds 950 msec
Ended Job = job_201509252000_0001
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 6.95 sec   HDFS Read: 258 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 950 msec
OK
5
Time taken: 77.515 seconds
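The job log above also prints the knobs for controlling the reduce phase. For example, a fixed number of reducers can be forced before re-running an aggregation (an illustrative sequence based on the settings named in the log):
hive> set mapred.reduce.tasks=1;
hive> select count(*) from test1;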


create table test1(str STRING)  STORED AS TEXTFILE; 
create table test2(str STRING) ;
hive> create table test3(str STRING)  STORED AS SEQUENCEFILE;
OK
Time taken: 0.112 seconds
 
hive> create table test4(str STRING)  STORED AS RCFILE; 
OK
Time taken: 0.502 seconds
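To double-check which storage format a table actually uses, its metadata can be inspected (assuming DESCRIBE FORMATTED is available in your Hive build; DESCRIBE EXTENDED works similarly):
hive> describe formatted test3;
The InputFormat/OutputFormat entries should point to the SequenceFile classes for test3 and the RCFile classes for test4.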
8. Import the data from the old table into the new table
INSERT OVERWRITE TABLE test4 SELECT * FROM test1;
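LOAD DATA only moves or copies files into the table's directory without converting their format, which is why the RCFile table is populated with INSERT ... SELECT from the existing text table rather than loaded directly. A quick verification (not part of the original transcript):
hive> select * from test4;
The query should return the same five names that were loaded into test1.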
9. Set Hive parameters
hive> SET hive.exec.compress.output=true;
hive> SET io.seqfile.compression.type=BLOCK;
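These settings only affect data written after they take effect, so to actually produce block-compressed output you would set them and then rewrite a table, for example the SequenceFile table test3 (an illustrative sequence, not from the original run):
hive> SET hive.exec.compress.output=true;
hive> SET io.seqfile.compression.type=BLOCK;
hive> INSERT OVERWRITE TABLE test3 SELECT * FROM test1;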
10. View Hive parameters
hive> SET;
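In the Hive CLI, SET without an argument prints the variables that have been overridden by the user or by Hive, SET -v additionally prints all Hadoop and Hive configuration variables, and naming a single property shows its current value:
hive> SET hive.exec.compress.output;
hive> SET -v;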

That is the full content of "An Example Analysis of Hive Command Operations". Thanks for reading! Hopefully you now have a solid grasp of these operations; if you would like to learn more, you are welcome to follow the 创新互联 industry news channel.

