Speculative execution in Hadoop works as follows. When the JobTracker notices that a task is running slower than expected, it launches a duplicate (speculative) copy of that task on a different TaskTracker, so that more than one TaskTracker processes the same input. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy; if other copies were still executing speculatively, Hadoop tells their TaskTrackers to abandon those tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first. In short, when data is submitted for a MapReduce job, the JobTracker takes charge of the job, distributes tasks to TaskTrackers, and duplicates slow-running tasks as a hedge against stragglers.
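Speculative execution can be enabled or disabled per task type through job configuration. A minimal sketch of a mapred-site.xml fragment, assuming the Hadoop 2.x property names (older releases used mapred.map.tasks.speculative.execution and mapred.reduce.tasks.speculative.execution instead):

```xml
<!-- Sketch: toggling speculative execution in mapred-site.xml (Hadoop 2.x names) -->
<property>
  <name>mapreduce.map.speculative</name>
  <value>true</value>
  <!-- launch backup copies of slow-running map tasks -->
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>false</value>
  <!-- disable for reducers, e.g. when duplicate work is too expensive -->
</property>
```

Disabling speculation for reducers while keeping it for mappers is a common trade-off, since duplicate reduce tasks re-fetch all their map outputs and can add significant load.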
How does speculative execution work in Hadoop?
Anonymous User
20-Jan-2015