Hi,

I am using MapR 2.1.2 with Oozie on a single VM, and when I run Hadoop jobs through Oozie I keep getting the error

'Task attempt_201307050821_0006_m_000000_0 is preempted(killed) because it took more than 10000ms on ephemeral slot'.

These jobs run fine when I submit them directly through Hadoop from the command line. I do not understand this error message. Can anyone please explain why it is thrown when I run my Hadoop jobs through Oozie?

Thank you.

Edit:

The error actually comes from Hadoop. I checked the logs, and this is what Hadoop reports:

2013-07-05 09:35:06,095 INFO org.apache.hadoop.mapred.JobInProgress: Job job_201307050821_0006 initialized successfully with 2 map tasks and 0 reduce tasks.
2013-07-05 09:35:06,171 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_201307050821_0006_r_000001_0' to tip task_201307050821_0006_r_000001, for tracker 'tracker_mapr-ab:localhost/127.0.0.1:37620'
2013-07-05 09:35:39,672 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201307050821_0006_r_000001_0' has completed task_201307050821_0006_r_000001 successfully.
2013-07-05 09:35:39,674 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201307050821_0006_m_000000_0' to tip task_201307050821_0006_m_000000, for tracker 'tracker_mapr-ab:localhost/127.0.0.1:37620'
2013-07-05 09:35:39,675 INFO org.apache.hadoop.mapred.JobInProgress: Choosing data-local task task_201307050821_0006_m_000000
2013-07-05 09:35:39,680 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201307050821_0006_m_000001_0' to tip task_201307050821_0006_m_000001, for tracker 'tracker_mapr-ab:localhost/127.0.0.1:37620'
2013-07-05 09:35:39,680 INFO org.apache.hadoop.mapred.JobInProgress: Choosing data-local task task_201307050821_0006_m_000001
2013-07-05 09:35:49,941 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201307050821_0006_m_000000_0: Task attempt_201307050821_0006_m_000000_0 is preempted(killed) because it took more than 10000ms on ephemeral slot on tasktracker tracker_mapr-ab:localhost/127.0.0.1:37620
2013-07-05 09:35:49,943 INFO org.apache.hadoop.mapred.JobInProgress: Adding task for cleanup attempt_201307050821_0006_m_000000_0 status = KILLED_UNCLEAN
2013-07-05 09:35:49,943 INFO org.apache.hadoop.mapred.JobInProgress: Adding task for cleanup attempt_201307050821_0006_m_000000_0 status = KILLED_UNCLEAN

asked 05 Jul '13, 00:40 by kiran, edited 05 Jul '13, 00:52


I suspect that this "error" can also occur when submitting the job directly via hadoop job. It is not really an error; it is a side effect of the express-lane feature, in which TaskTrackers on a busy cluster try to sneak a small job in on ephemeral slots. If the job turns out not to qualify as short, because it ends up taking longer than expected, it is preempted.

answered 09 Jul '13, 10:31 by gera

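Since the jobs that hit this are the ones launched from Oozie, it is worth noting where per-job Hadoop properties go in that case: the <configuration> block of the workflow action. Below is a minimal sketch, not taken from the original workflow; the action name, the ${jobTracker}/${nameNode} placeholders and the 60000 ms value are illustrative, and because the property name suggests a TaskTracker-scoped setting, the cluster-wide change described in the comment below may be what actually takes effect.

    <action name="mr-example">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <!-- illustrative value: milliseconds a task may spend on an
                         ephemeral slot before it is preempted (the log above
                         shows a 10000 ms limit) -->
                    <name>mapred.tasktracker.ephemeral.tasks.timeout</name>
                    <value>60000</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
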
I am running on a single VM and only running small jobs for now. I got rid of this error by increasing the ephemeral-slot timeout (mapred.tasktracker.ephemeral.tasks.timeout).

(09 Jul '13, 14:42) kiran
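
For reference, a minimal sketch of that change, assuming it goes into mapred-site.xml on the node running the TaskTracker; the 60000 ms value is only an example, chosen to sit well above the 10000 ms limit reported in the log, and the TaskTracker most likely needs a restart to pick it up:

    <property>
        <!-- maximum time (ms) a task may run on an ephemeral slot before it is
             preempted; the log above shows the current limit of 10000 ms -->
        <name>mapred.tasktracker.ephemeral.tasks.timeout</name>
        <value>60000</value>
    </property>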