
Cloudera Docker image... everything goes wrong


I am running a 16 GB MacBook Pro on OS X El Capitan. I installed the Cloudera Docker image using


docker pull cloudera/quickstart:latest
docker run --privileged=true --hostname=quickstart.cloudera -t -i 9f3ab06c7554 /usr/bin/docker-quickstart

The image boots fine, and I can see most of the services starting up:


Started Hadoop historyserver:                              [  OK  ]
starting nodemanager, logging to /var/log/hadoop-yarn/yarn-yarn-nodemanager-quickstart.cloudera.out
Started Hadoop nodemanager:                                [  OK  ]
starting resourcemanager, logging to /var/log/hadoop-yarn/yarn-yarn-resourcemanager-quickstart.cloudera.out
Started Hadoop resourcemanager:                            [  OK  ]
starting master, logging to /var/log/hbase/hbase-hbase-master-quickstart.cloudera.out
Started HBase master daemon (hbase-master):                [  OK  ]
starting rest, logging to /var/log/hbase/hbase-hbase-rest-quickstart.cloudera.out
Started HBase rest daemon (hbase-rest):                    [  OK  ]
starting thrift, logging to /var/log/hbase/hbase-hbase-thrift-quickstart.cloudera.out
Started HBase thrift daemon (hbase-thrift):                [  OK  ]
Starting Hive Metastore (hive-metastore):                  [  OK  ]
Started Hive Server2 (hive-server2):                       [  OK  ]
Starting Sqoop Server:                                     [  OK  ]
Sqoop home directory: /usr/lib/sqoop2

There are some failures as well:


Failure to start Spark history-server (spark-history-server return value: 1)  [FAILED]
Starting Hadoop HBase regionserver daemon: starting regionserver, logging to /var/log/hbase/hbase-hbase-regionserver-quickstart.cloudera.out
Started HBase regionserver daemon (hbase-regionserver):    [FAILED]
Starting hue:                                              [FAILED]

But once boot-up is complete, anything I try to run fails.


For example, trying to run spark-shell:


[root@quickstart /]# spark-shell
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000b0000000, 357892096, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 357892096 bytes for committing reserved memory.
# An error report file with more information is saved as:
# //hs_err_pid3113.log

Or trying to run the Hive shell:


[root@quickstart /]# hive
Unable to determine Hadoop version information.
'hadoop version' returned:
Hadoop 2.6.0-cdh5.5.0
Subversion https://github.com/cloudera/hadoop -r fd21232cef7b8c1f536965897ce20f50b83ee7b2
Compiled by jenkins on 2015-11-09T20:37Z
Compiled with protoc 2.5.0
From source with checksum 98e07176d1787150a6a9c087627562c
This command was run using /usr/jars/hadoop-common-2.6.0-cdh5.5.0.jar
[root@quickstart /]#

My question is: what can I do so that spark-shell and the Hive shell run successfully?


1 solution

#1


Since you are running Docker on a Mac, Docker runs inside a VirtualBox VM, not directly with the Mac's memory. (The same thing would happen on Windows.)


You probably wouldn't get these errors on a Linux host since Docker isn't virtualized there.


The Cloudera QuickStart VM recommends 8 GB of memory to run all the services, while the Docker Toolbox VM defaults to far less (512 MB, I think).
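You can check how much memory the VirtualBox VM actually has; a sketch, assuming your docker-machine VM has the usual Docker Toolbox name "default" (adjust the name if yours differs):

```shell
# Show the VirtualBox VM's current memory allocation.
# "default" is the usual docker-machine name; substitute yours if different.
VBoxManage showvminfo default | grep "Memory size"
```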


The solution would be to stop the docker-machine instance, open VirtualBox, and increase the memory size of the "default" VM to the necessary amount.
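The same resize can be done from the command line instead of the VirtualBox GUI; a sketch, assuming the machine is named "default" and 8 GB is the target:

```shell
# Resize the docker-machine VM's memory from the command line.
# Assumes the machine is named "default"; adjust if yours differs.
docker-machine stop default                  # VM must be powered off to resize
VBoxManage modifyvm default --memory 8192    # memory in MB
docker-machine start default
eval "$(docker-machine env default)"         # point the docker CLI back at the VM
```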


