Apache Spark Basics

INTRODUCTION

Apache Spark is an open source parallel processing framework that enables users to run large-scale data analytics applications across clustered computers.

BASICS

Apache Spark can process data from a variety of data repositories, including the Hadoop Distributed File System (HDFS), NoSQL databases and data warehouse systems such as Apache Hive. Spark supports in-memory processing to boost the performance of big data analytics applications, but it can also fall back to conventional disk-based processing when a data set is too large to fit into the available system memory.
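
As a rough sketch of this hybrid model, the Scala snippet below reads a data set from HDFS and persists it at the MEMORY_AND_DISK storage level, so partitions stay in memory when they fit and spill to disk when they do not. The application name, master setting and HDFS path are placeholders invented for illustration, not details from this article.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.storage.StorageLevel

    object InMemorySketch {
      def main(args: Array[String]): Unit = {
        // Placeholder session config; "local[*]" runs on all local cores.
        val spark = SparkSession.builder()
          .appName("in-memory-sketch")
          .master("local[*]")
          .getOrCreate()

        // Hypothetical HDFS path; Spark reads similarly from other stores.
        val events = spark.read.json("hdfs:///data/events")

        // MEMORY_AND_DISK keeps partitions in RAM when they fit and
        // spills the remainder to disk, as described above.
        events.persist(StorageLevel.MEMORY_AND_DISK)

        println(s"rows: ${events.count()}")
        spark.stop()
      }
    }

The storage level is the key design choice here: MEMORY_ONLY (the default for persist) recomputes partitions that do not fit, while MEMORY_AND_DISK writes them out instead, trading disk I/O for recomputation.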

The technology was initially designed in 2009 by researchers at the University of California, Berkeley, as a way to speed up processing jobs in Hadoop systems. Spark became a top-level project of the Apache Software Foundation in February 2014, and Version 1.0 of Apache Spark was released in May 2014. Spark provides programmers with a potentially faster and more flexible alternative to MapReduce, the software framework that early versions of Hadoop were tied to. Spark's developers say it can run jobs 100 times faster than MapReduce when data is processed in memory, and 10 times faster on disk.

In addition, Spark can handle more than the batch processing workloads that MapReduce is limited to. The core Spark engine functions partly as an application programming interface (API) layer and underpins a set of related tools for managing and analyzing data, including a SQL query engine (Spark SQL), a library of machine learning algorithms (MLlib), a graph processing system (GraphX) and streaming data processing software (Spark Streaming).
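
To illustrate one of the tools layered on the core engine, the sketch below uses Spark SQL to register a DataFrame as a temporary view and query it with standard SQL. The view name and sample rows are made up for the example; they are not taken from this article.

    import org.apache.spark.sql.SparkSession

    object SparkSqlSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("spark-sql-sketch")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._

        // Tiny made-up data set standing in for a real table.
        val sales = Seq(("books", 12.0), ("games", 30.0), ("books", 8.5))
          .toDF("category", "amount")

        // Expose the DataFrame to the SQL engine and query it.
        sales.createOrReplaceTempView("sales")
        spark.sql(
          "SELECT category, SUM(amount) AS total FROM sales GROUP BY category"
        ).show()

        spark.stop()
      }
    }

Because Spark SQL, MLlib, GraphX and Spark Streaming all share the same engine and data abstractions, results from a query like this one can be passed directly into a machine learning pipeline or a streaming job without leaving the framework.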