sparkR.init {SparkR}    R Documentation
Description

This function initializes a new SparkContext.

Usage

sparkR.init(
  master = "",
  appName = "SparkR",
  sparkHome = Sys.getenv("SPARK_HOME"),
  sparkEnvir = list(),
  sparkExecutorEnv = list(),
  sparkJars = "",
  sparkPackages = ""
)
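
All arguments have defaults, so the simplest call only needs a master URL.
A minimal sketch (the "local[*]" master string is illustrative, not a
default taken from this page):

library(SparkR)
# Run Spark locally with one worker thread per logical core; every other
# argument falls back to the default shown in Usage above.
sc <- sparkR.init(master = "local[*]")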
Arguments

master             The Spark master URL.
appName            Application name to register with the cluster manager.
sparkHome          Spark Home directory.
sparkEnvir         Named list of environment variables to set on worker nodes.
sparkExecutorEnv   Named list of environment variables to be used when launching executors.
sparkJars          Character vector of jar files to pass to the worker nodes.
sparkPackages      Character vector of package coordinates (e.g. Maven
                   coordinates of the form "groupId:artifactId:version").
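
The two named lists serve different purposes: judging from the examples
below, sparkEnvir carries Spark configuration properties, while
sparkExecutorEnv sets operating-system environment variables for the
executor processes. A sketch with hypothetical values:

sc <- sparkR.init(
  master           = "local[2]",
  appName          = "EnvDemo",
  sparkEnvir       = list(spark.executor.memory = "2g"),  # Spark conf property
  sparkExecutorEnv = list(TZ = "UTC")                     # env var on each executor
)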
Note

sparkR.init since 1.4.0
Examples

## Not run: 
sc <- sparkR.init("local[2]", "SparkR", "/home/spark")
sc <- sparkR.init("local[2]", "SparkR", "/home/spark",
                  list(spark.executor.memory="1g"))
sc <- sparkR.init("yarn-client", "SparkR", "/home/spark",
                  list(spark.executor.memory="4g"),
                  list(LD_LIBRARY_PATH="/directory of JVM libraries (libjvm.so) on workers/"),
                  c("one.jar", "two.jar", "three.jar"),
                  c("com.databricks:spark-avro_2.11:2.0.1"))
## End(Not run)
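
A typical next step, not shown on this page, is to pass the returned context
to the companion SparkR 1.x helpers sparkRSQL.init(), createDataFrame(), and
sparkR.stop(); a brief usage sketch (faithful is R's built-in dataset, used
here only as convenient sample data):

## Not run: 
sc <- sparkR.init(master = "local[2]", appName = "SparkR")
sqlContext <- sparkRSQL.init(sc)             # SQLContext for DataFrame operations
df <- createDataFrame(sqlContext, faithful)  # distribute a local data.frame
head(df)                                     # peek at the first rows
sparkR.stop()                                # stop the SparkContext when done
## End(Not run)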