Apache Hadoop
The Apache® Hadoop® project develops open-source software for reliable, scalable, distributed computing.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.
Learn more »
Download »
Getting started »
Latest news
Release 3.5.0 available
2026 Apr 2
This is the first stable release of the Apache Hadoop 3.5 line.
It contains 485 bug fixes, improvements, and enhancements since 3.4.
Users are encouraged to read the
overview of major changes
since 3.4.
For details of the 485 bug fixes, improvements, and other enhancements since the previous 3.4.3 release,
please check
release notes
and
changelog
Release 3.4.3 available
2026 Feb 24
This is a release in the Apache Hadoop 3.4 release line.
Users of Apache Hadoop 3.4.2 and earlier should upgrade to
this release.
All users are encouraged to read the
overview of major changes
since release 3.4.2.
For details of bug fixes, improvements, and other enhancements since
the previous 3.4.2 release, please check
release notes
and
changelog
This release does not include the bundle.jar containing the AWS SDK, which is used by the s3a connector
in the hadoop-aws module.
To use the s3a connector, download the version of the SDK you wish to use from Maven Central.
For this release, the version to download is 2.35.4.
Download the bundle-2.35.4.jar artifact and verify its signature with
the accompanying bundle-2.35.4.jar.asc file.
Copy the JAR to share/hadoop/common/lib/.
(Newer AWS SDK versions should work, though regressions are always possible.)
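The steps above can be sketched as a short shell session. The Maven Central URL below follows the standard repository layout for the `software.amazon.awssdk:bundle` artifact and is an assumption — verify the exact path on Maven Central before use:

```shell
# Sketch of fetching the AWS SDK bundle for the s3a connector.
# The repository path below is an assumption based on the standard
# Maven Central layout; confirm it before relying on this.
VERSION=2.35.4
BASE="https://repo1.maven.org/maven2/software/amazon/awssdk/bundle/${VERSION}"

# Download the JAR and its detached signature.
curl -LO "${BASE}/bundle-${VERSION}.jar"
curl -LO "${BASE}/bundle-${VERSION}.jar.asc"

# Verify the signature (requires the signer's public key in your keyring).
gpg --verify "bundle-${VERSION}.jar.asc" "bundle-${VERSION}.jar"

# Copy the JAR into the Hadoop distribution.
cp "bundle-${VERSION}.jar" "${HADOOP_HOME}/share/hadoop/common/lib/"
```

This keeps the large SDK out of the Hadoop release artifacts while letting s3a users pin the exact SDK version they have verified.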
Release 3.4.2 available
2025 Aug 29
This is a release in the Apache Hadoop 3.4 release line.
Users of Apache Hadoop 3.4.1 and earlier should upgrade to
this release.
All users are encouraged to read the
overview of major changes
since release 3.4.1.
For details of bug fixes, improvements, and other enhancements since
the previous 3.4.1 release, please check
release notes
and
changelog
Release 3.4.1 available
2024 Oct 18
This is a release in the Apache Hadoop 3.4 release line.
Users of Apache Hadoop 3.4.0 and earlier should upgrade to
this release.
All users are encouraged to read the
overview of major changes
since release 3.4.0.
We have also introduced a lean tarball, a smaller tar file that omits the AWS SDK
(the SDK alone is about 500 MB). This eases usage for non-AWS users;
AWS users can add the SDK JAR explicitly if desired.
For details of bug fixes, improvements, and other enhancements since
the previous 3.4.0 release, please check
release notes
and
changelog
Release 3.4.0 available
2024 Mar 17
This is the first release of the Apache Hadoop 3.4 line. It contains 2888 bug fixes, improvements, and enhancements since 3.3.
Users are encouraged to read the
overview of major changes
For details, please check the
release notes
and
changelog
Release archive →
News archive →
Modules
The project includes these modules:
Hadoop Common
: The common utilities that support the other Hadoop modules.
Hadoop Distributed File System (HDFS™)
: A distributed file system that provides high-throughput access to application data.
Hadoop YARN
: A framework for job scheduling and cluster resource management.
Hadoop MapReduce
: A YARN-based system for parallel processing of large data sets.
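The MapReduce model that this module implements can be illustrated with a small word-count sketch. This is plain Python with no Hadoop APIs — the map, shuffle, and reduce phases that the framework distributes across a cluster are simulated in-process purely for illustration:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group intermediate values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts emitted for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["the"])  # -> 3
```

In a real Hadoop job, the map and reduce functions run as distributed tasks over HDFS blocks, and YARN schedules them across the cluster; the programming model, however, is exactly this three-phase shape.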
Who Uses Hadoop?
A wide variety of companies and organizations use Hadoop for both research and production.
Users are encouraged to add themselves to the Hadoop PoweredBy wiki page.
Related projects
Other Hadoop-related projects at Apache include:
Ambari™
: A web-based tool for provisioning,
managing, and monitoring Apache Hadoop clusters, with support for
Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper,
Oozie, Pig, and Sqoop. Ambari also provides a dashboard for viewing
cluster health (such as heatmaps) and the ability to view MapReduce,
Pig, and Hive applications visually, along with features to diagnose
their performance characteristics in a user-friendly manner.
Avro™
: A data serialization system.
Cassandra™
: A scalable multi-master database
with no single points of failure.
Chukwa™
: A data collection system for managing
large distributed systems.
HBase™
: A scalable, distributed database that
supports structured data storage for large tables.
Hive™
: A data warehouse infrastructure that provides
data summarization and ad hoc querying.
Mahout™
: A scalable machine learning and data
mining library.
Ozone™
: A scalable, redundant, and
distributed object store for Hadoop.
Pig™
: A high-level data-flow language and execution
framework for parallel computation.
Spark™
: A fast and general compute engine for
Hadoop data. Spark provides a simple and expressive programming
model that supports a wide range of applications, including ETL,
machine learning, stream processing, and graph computation.
Submarine
: A unified AI platform that allows
engineers and data scientists to run machine learning and deep learning
workloads in distributed clusters.
Tez™
: A generalized data-flow programming framework,
built on Hadoop YARN, which provides a powerful and flexible engine
to execute an arbitrary DAG of tasks to process data for both batch
and interactive use-cases. Tez is being adopted by Hive™, Pig™ and
other frameworks in the Hadoop ecosystem, and also by other
commercial software (e.g. ETL tools), to replace Hadoop™ MapReduce
as the underlying execution engine.
ZooKeeper™
: A high-performance coordination
service for distributed applications.