How does speculative execution work in Hadoop?

Last updated: 9/21/2020 9:14:54 PM

1 Answer

Anonymous User

When data is submitted for a MapReduce job in Hadoop, the JobTracker takes charge of the job and distributes its tasks to TaskTrackers. With speculative execution, the JobTracker has multiple TaskTrackers process the same input in parallel. When a task completes, its TaskTracker announces this to the JobTracker, and whichever copy of the task finishes first becomes the definitive copy. If other copies were still executing speculatively, Hadoop tells their TaskTrackers to abandon those tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first.
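
Speculative execution is enabled by default and can be toggled per job type in the cluster configuration. As a sketch, using the standard Hadoop 2.x property names, the relevant entries in `mapred-site.xml` look like this:

```xml
<configuration>
  <!-- Launch speculative duplicates of slow map tasks (default: true) -->
  <property>
    <name>mapreduce.map.speculative</name>
    <value>true</value>
  </property>
  <!-- Launch speculative duplicates of slow reduce tasks (default: true) -->
  <property>
    <name>mapreduce.reduce.speculative</name>
    <value>false</value>
  </property>
</configuration>
```

Disabling speculation for reducers, as shown above, is a common tuning choice: duplicate reduce tasks must re-fetch all of their map outputs, so a speculative reducer can add significant network load for little benefit.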