How to run a Hadoop M/R job from a plain Java project in Eclipse (without creating a Map/Reduce project)

The Java project code is as follows:
package hhhhhhhh;
import mypackage.demo;
public class ceshi {
    public static void main(String[] args) throws Exception {
        demo one = new demo();
        one.setQueryString("市");
        one.getResultList();
    }
}
mypackage.demo is a class I packaged up after testing it successfully in a Map/Reduce project created with Eclipse on Windows. The Hadoop-related code inside it looks roughly like this:
private static Configuration conf = new Configuration();
conf.set("mapred.jar", "demo.jar");
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(demo.class);

// set the mapper and reducer classes
job.setMapperClass(MyMapper.class);
job.setMapOutputKeyClass(LongWritable.class);
job.setMapOutputValueClass(XXX.class); // custom data type
job.setReducerClass(MyReducer.class);
FileInputFormat.addInputPath(job, new Path("hdfs://10.10.10.10:8020/input/"));
FileOutputFormat.setOutputPath(job, new Path("hdfs://10.10.10.10:8020/output/"));
job.waitForCompletion(true);

Eclipse runs on Windows 8 and the Hadoop cluster is set up in a Linux virtual machine. As soon as I run the Java project it throws:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration at mypackage...........
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
... 1 more
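This error usually means the Hadoop client jars are on the build path of the original Map/Reduce project but not of the plain Java project: packaging demo.jar does not carry Hadoop's own classes along with it. A minimal sketch to check which classes are actually visible at runtime (the Hadoop class name is taken from the stack trace above; everything else is plain JDK):

```java
public class ClasspathCheck {
    // Returns true when the named class can be loaded by the current classloader.
    public static boolean onClasspath(String fqcn) {
        try {
            Class.forName(fqcn);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Prints false unless the Hadoop jars were added to the project's classpath.
        System.out.println(onClasspath("org.apache.hadoop.conf.Configuration"));
    }
}
```

If this prints false, add the Hadoop distribution's jars (hadoop-core or hadoop-common, plus the jars under its lib/ directory) to the plain Java project's Build Path, or include them via -cp when launching.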

1. Download Hadoop-eclipse-plugin-1.2.1.jar and copy it into eclipse/plugins.

2. Open the Map/Reduce perspective
In Eclipse, go to Window -> Open Perspective -> Other and select Map/Reduce.

3. In the Map/Reduce Locations tab, create a new Location.

4. In Project Explorer you can now browse the file system of the location just defined.

5. Prepare the test data and upload it to HDFS.
liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -mkdir in
liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -copyFromLocal maxTemp.txt in
liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -ls in
Found 1 items
-rw-r--r-- 1 liaoliuqing supergroup 953 2014-12-14 09:47 /user/liaoliuqing/in/maxTemp.txt

The content of maxTemp.txt is as follows:
123456798676231190101234567986762311901012345679867623119010123456798676231190101234561+00121534567890356
123456798676231190101234567986762311901012345679867623119010123456798676231190101234562+01122934567890456
123456798676231190201234567986762311901012345679867623119010123456798676231190101234562+02120234567893456
123456798676231190401234567986762311901012345679867623119010123456798676231190101234561+00321234567803456
123456798676231190101234567986762311902012345679867623119010123456798676231190101234561+00429234567903456
123456798676231190501234567986762311902012345679867623119010123456798676231190101234561+01021134568903456
123456798676231190201234567986762311902012345679867623119010123456798676231190101234561+01124234578903456
123456798676231190301234567986762311905012345679867623119010123456798676231190101234561+04121234678903456
123456798676231190301234567986762311905012345679867623119010123456798676231190101234561+00821235678903456
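The linked MaxTemperature program processes records like these. As a rough sketch of the parsing involved (the field offsets are inferred from the sample lines above, so treat them as assumptions rather than the real file specification), the year sits at characters 15-19 and a signed temperature reading follows the '+' sign:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MaxTempSketch {
    // Year field: characters 15..19 of each record (inferred from the sample data).
    public static String year(String record) {
        return record.substring(15, 19);
    }

    // Temperature: the four digits after the '+'/'-' sign (an assumption about the format).
    public static int temperature(String record) {
        int p = record.indexOf('+');
        if (p < 0) p = record.indexOf('-');
        int sign = record.charAt(p) == '-' ? -1 : 1;
        return sign * Integer.parseInt(record.substring(p + 1, p + 5));
    }

    // Per-year maximum, i.e. what the map and reduce phases compute together.
    public static Map<String, Integer> maxPerYear(List<String> records) {
        Map<String, Integer> max = new HashMap<>();
        for (String r : records) {
            max.merge(year(r), temperature(r), Math::max);
        }
        return max;
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList(
            "123456798676231190101234567986762311901012345679867623119010123456798676231190101234561+00121534567890356",
            "123456798676231190101234567986762311901012345679867623119010123456798676231190101234562+01122934567890456");
        System.out.println(maxPerYear(sample));
    }
}
```

Under these assumed offsets the nine sample lines contain five distinct years (1901-1905), which is consistent with the "Reduce input groups=5" counter in the job output of step 8.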

6. Prepare the map-reduce program.
The program is available at http://blog.csdn.net/jediael_lu/article/details/37596469

7. Run the program
MaxTemperature.java -> Run As -> Run Configurations
Enter the input and output directories under Arguments, then run.

This run reads from HDFS, but the program can also be run against the local file system, which is a convenient way to debug it. For example, pass the arguments:

/Users/liaoliuqing/in /Users/liaoliuqing/out
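Both kinds of path work because Hadoop picks the file system from the URI scheme of each path; a path with no scheme falls back to the default file system configured in fs.default.name. A small sketch of that scheme-based dispatch using only the JDK:

```java
import java.net.URI;

public class SchemeDemo {
    // Return the file-system scheme of a path string, or "default" when none is
    // given (Hadoop would then fall back to the configured default file system).
    public static String schemeOf(String path) {
        String s = URI.create(path).getScheme();
        return s == null ? "default" : s;
    }

    public static void main(String[] args) {
        System.out.println(schemeOf("hdfs://localhost:9000/user/liaoliuqing/in"));
        System.out.println(schemeOf("/Users/liaoliuqing/in"));
    }
}
```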

8. The following output appears in the Eclipse console:
14/12/14 10:52:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/12/14 10:52:05 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/12/14 10:52:05 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/12/14 10:52:05 INFO input.FileInputFormat: Total input paths to process : 1
14/12/14 10:52:05 WARN snappy.LoadSnappy: Snappy native library not loaded
14/12/14 10:52:06 INFO mapred.JobClient: Running job: job_local1815770300_0001
14/12/14 10:52:06 INFO mapred.LocalJobRunner: Waiting for map tasks
14/12/14 10:52:06 INFO mapred.LocalJobRunner: Starting task: attempt_local1815770300_0001_m_000000_0
14/12/14 10:52:06 INFO mapred.Task: Using ResourceCalculatorPlugin : null
14/12/14 10:52:06 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/liaoliuqing/in/maxTemp.txt:0+953
14/12/14 10:52:06 INFO mapred.MapTask: io.sort.mb = 100
14/12/14 10:52:06 INFO mapred.MapTask: data buffer = 79691776/99614720
14/12/14 10:52:06 INFO mapred.MapTask: record buffer = 262144/327680
14/12/14 10:52:06 INFO mapred.MapTask: Starting flush of map output
14/12/14 10:52:06 INFO mapred.MapTask: Finished spill 0
14/12/14 10:52:06 INFO mapred.Task: Task:attempt_local1815770300_0001_m_000000_0 is done. And is in the process of commiting
14/12/14 10:52:06 INFO mapred.LocalJobRunner:
14/12/14 10:52:06 INFO mapred.Task: Task 'attempt_local1815770300_0001_m_000000_0' done.
14/12/14 10:52:06 INFO mapred.LocalJobRunner: Finishing task: attempt_local1815770300_0001_m_000000_0
14/12/14 10:52:06 INFO mapred.LocalJobRunner: Map task executor complete.
14/12/14 10:52:06 INFO mapred.Task: Using ResourceCalculatorPlugin : null
14/12/14 10:52:06 INFO mapred.LocalJobRunner:
14/12/14 10:52:06 INFO mapred.Merger: Merging 1 sorted segments
14/12/14 10:52:06 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 90 bytes
14/12/14 10:52:06 INFO mapred.LocalJobRunner:
14/12/14 10:52:06 INFO mapred.Task: Task:attempt_local1815770300_0001_r_000000_0 is done. And is in the process of commiting
14/12/14 10:52:06 INFO mapred.LocalJobRunner:
14/12/14 10:52:06 INFO mapred.Task: Task attempt_local1815770300_0001_r_000000_0 is allowed to commit now
14/12/14 10:52:06 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1815770300_0001_r_000000_0' to hdfs://localhost:9000/user/liaoliuqing/out
14/12/14 10:52:06 INFO mapred.LocalJobRunner: reduce > reduce
14/12/14 10:52:06 INFO mapred.Task: Task 'attempt_local1815770300_0001_r_000000_0' done.
14/12/14 10:52:07 INFO mapred.JobClient: map 100% reduce 100%
14/12/14 10:52:07 INFO mapred.JobClient: Job complete: job_local1815770300_0001
14/12/14 10:52:07 INFO mapred.JobClient: Counters: 19
14/12/14 10:52:07 INFO mapred.JobClient: File Output Format Counters
14/12/14 10:52:07 INFO mapred.JobClient: Bytes Written=43
14/12/14 10:52:07 INFO mapred.JobClient: File Input Format Counters
14/12/14 10:52:07 INFO mapred.JobClient: Bytes Read=953
14/12/14 10:52:07 INFO mapred.JobClient: FileSystemCounters
14/12/14 10:52:07 INFO mapred.JobClient: FILE_BYTES_READ=450
14/12/14 10:52:07 INFO mapred.JobClient: HDFS_BYTES_READ=1906
14/12/14 10:52:07 INFO mapred.JobClient: FILE_BYTES_WRITTEN=135618
14/12/14 10:52:07 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=43
14/12/14 10:52:07 INFO mapred.JobClient: Map-Reduce Framework
14/12/14 10:52:07 INFO mapred.JobClient: Reduce input groups=5
14/12/14 10:52:07 INFO mapred.JobClient: Map output materialized bytes=94
14/12/14 10:52:07 INFO mapred.JobClient: Combine output records=0
14/12/14 10:52:07 INFO mapred.JobClient: Map input records=9
14/12/14 10:52:07 INFO mapred.JobClient: Reduce shuffle bytes=0
14/12/14 10:52:07 INFO mapred.JobClient: Reduce output records=5
14/12/14 10:52:07 INFO mapred.JobClient: Spilled Records=16
14/12/14 10:52:07 INFO mapred.JobClient: Map output bytes=72
14/12/14 10:52:07 INFO mapred.JobClient: Total committed heap usage (bytes)=329252864
14/12/14 10:52:07 INFO mapred.JobClient: SPLIT_RAW_BYTES=118
14/12/14 10:52:07 INFO mapred.JobClient: Map output records=8
14/12/14 10:52:07 INFO mapred.JobClient: Combine input records=0

14/12/14 10:52:07 INFO mapred.JobClient: Reduce input records=8

Follow-up: As I already said, this tested successfully in a Map/Reduce project; your answer is obviously copied from somewhere else.
