===============================================================================
create table psn
(
    id      int,
    name    string,
    likes   array<string>,
    address map<string,string>
)
partitioned by (age int)             -- partition column; not stored in the data file itself
row format delimited
fields terminated by '\t'            -- column separator in the source file
collection items terminated by '-'   -- separator between elements of the likes array
map keys terminated by ':'           -- key:value separator inside the address map
lines terminated by '\n';
====================================================================================
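For reference, given the delimiters declared above, the source file /root/a.txt would look roughly like this (reconstructed from the select output further below; columns are tab-separated, and the partition column age does not appear in the file):

1	zhang3	sing-tennis-running	beijing:daxing
2	li4	sing-pingpong-swim	shanghai:baoshan
3	wang5	read-joke-football	guangzou:baiyun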
hive> load data local inpath '/root/a.txt' overwrite into table psn partition(age=10);
Loading data to table default.psn partition (age=10)
OK
Time taken: 3.817 seconds
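To double-check that the partition was created, one could run (not part of the original session):

hive> show partitions psn;

which should list the single partition age=10.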
=================================================================================
hive> select * from psn;
OK
1 zhang3 ["sing","tennis","running"] {"beijing":"daxing"} 10
2 li4 ["sing","pingpong","swim"] {"shanghai":"baoshan"} 10
3 wang5 ["read","joke","football"] {"guangzou":"baiyun"} 10
==============================================================================
Requirement:
In a single query, count how many distinct hobbies appear, how many distinct cities appear, and how many distinct districts appear.
Analysis:
This is a lot like the Hive wordcount case; in fact, it is two wordcount cases rolled into one, except that no split is needed here.
In the wordcount case, explode neatly solved the single-column wordcount.
However, for a UDTF such as explode, Hive only allows a single UDTF in the select clause, and the UDTF cannot be combined with other columns or functions:
##### allowed: select explode(..) from psn;
##### not allowed: select explode(..), explode(..) from psn;
##### not allowed: select id, explode(..) from psn;
This makes some more complex logic impossible to express directly, such as the two-column wordcount above.
This is where lateral view comes in: it organizes the multi-row output of a UDTF into a virtual table that is joined back to the source rows, as sketched below.
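To see what lateral view actually does, consider a single expansion of the likes array against the psn data above (a minimal sketch; the result rows are reconstructed from the sample data, not taken from the original session):

hive> select id, c1 from psn lateral view explode(likes) t1 as c1;
1	sing
1	tennis
1	running
2	sing
2	pingpong
2	swim
3	read
3	joke
3	football

Each source row is joined with every element of its exploded array, so ordinary columns such as id can be selected alongside the UDTF output.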
===================================================================================
hive> select count(distinct c1), count(distinct c2), count(distinct c3) from psn
    > lateral view explode(likes) t1 as c1
    > lateral view explode(address) t2 as c2, c3;
##### t1 and t2 are the names of the virtual tables produced by the UDTFs; c1/c2/c3 are column aliases.
##### exploding an array produces one column; exploding a map produces two (key and value).
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-04-24 22:59:16,471 Stage-1 map = 0%, reduce = 0%
2019-04-24 22:59:25,681 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.76 sec
2019-04-24 22:59:36,268 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.15 sec
MapReduce Total cumulative CPU time: 4 seconds 150 msec
Ended Job = job_1556088929464_0004
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 4.15 sec HDFS Read: 14429 HDFS Write: 105 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 150 msec
OK
8 3 3
Time taken: 35.986 seconds, Fetched: 1 row(s)
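Note that count(distinct ...) answers "how many different values", which matches the 8/3/3 result above. If instead a count per individual value is wanted (a true per-word wordcount), a group by over the same lateral view works, e.g. for hobbies (a sketch, not from the original session):

hive> select c1, count(*) from psn lateral view explode(likes) t1 as c1 group by c1;

With the sample data this should return sing with a count of 2 and every other hobby with a count of 1.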
=============================================================================