Class

com.intersystems.spark

SparkSessionEx

Related Doc: package spark

implicit class SparkSessionEx extends AnyRef

Extends the given session with IRIS specific methods.
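
A minimal usage sketch, assuming the extension methods are brought into scope with a wildcard import of the connector package (the application name, query, and table below are hypothetical):

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import com.intersystems.spark._                        // assumed: brings SparkSessionEx into implicit scope

    val spark: SparkSession = SparkSession.builder()
      .appName("iris-example")                             // hypothetical application name
      .getOrCreate()

    // With the implicit class in scope, the extension methods can be called
    // directly on the session, as if they were defined on SparkSession itself.
    val df: DataFrame = spark.dataframe("SELECT name, age FROM Sample.Person")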

Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new SparkSessionEx(ss: SparkSession)

    ss

    The active Spark session.

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. def dataframe(text: String, column: String, lo: Long, hi: Long, partitions: Int): DataFrame

    Executes a query on the default cluster to yield a suitably partitioned DataFrame.

    text

    The text of a query to be executed on the cluster or the name of an existing table in the cluster to load.

    column

    The name of the integral valued column in the result set with which to further partition the query.

    lo

    The lower bound of the partitioning column.

    hi

    The upper bound of the partitioning column.

    partitions

    The number of partitions per instance to create.

    returns

    The results of the query in the form of a suitably partitioned DataFrame.

    Exceptions thrown

    SQLException if a database access error occurs.

    See also

    JDBC to Other Databases for more on the semantics of the column, lo, hi, and partitions parameters.
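
    For example, a minimal call sketch (the table, partitioning column, bounds, and partition count below are hypothetical):

    spark.dataframe(
      "SELECT id, name FROM Sample.Person",   // query text, or just a table name
      "id",                                   // integral-valued partitioning column
      0L,                                     // lower bound of "id"
      1000000L,                               // upper bound of "id"
      4)                                      // partitions to create per instance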

  7. def dataframe(text: String, mfpi: Int = 1): DataFrame

    Executes a query on the default cluster to yield a suitably partitioned DataFrame.

    text

    The text of a query to be executed on the cluster or the name of an existing table in the cluster to load.

    mfpi

    The maximum number of factors per distinct instance to include in the factorization implicitly performed by the server, or 0 if no limit is necessary.

    returns

    The results of the query in the form of a suitably partitioned DataFrame.

    Exceptions thrown

    SQLException if a database access error occurs.
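
    For example, a minimal call sketch (the query below is hypothetical):

    // Allow the server to factorize the query into at most 2 factors per instance.
    spark.dataframe("SELECT name, age FROM Sample.Person", 2)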

  8. def dataset[α](text: String, column: String, lo: Long, hi: Long, partitions: Int)(implicit arg0: Encoder[α]): Dataset[α]

    Executes a query on the default cluster to yield a suitably partitioned Dataset of elements of type α.

    Typically α will be a user defined case class. For example:

    case class Person(name: String, age: Int)

    spark.dataset[Person]("SELECT name, age FROM Person", "column", 0, 10000, 2)

    returns a result set of type Dataset[Person] whose fields name and age are matched by name against the metadata of the underlying DataFrame.

    The only real requirement, however, is that the type be a member of the Encoder type class. This includes classes implementing the Product trait, and so, in particular, all small case classes.

    α

    Any type for which an Encoder is available; includes small case classes and extensions of trait Product.

    text

    The text of a query to be executed on the cluster or the name of an existing table in the cluster to load.

    column

    The name of the integral valued column in the result set with which to further partition the query.

    lo

    The lower bound of the partitioning column.

    hi

    The upper bound of the partitioning column.

    partitions

    The number of partitions per instance to create.

    returns

    The results of the query in the form of a suitably partitioned Dataset[α].

    Exceptions thrown

    SQLException if a database access error occurs.

    See also

    JDBC to Other Databases for more on the semantics of the column, lo, hi, and partitions parameters.

  9. def dataset[α](text: String, mfpi: Int = 1)(implicit arg0: Encoder[α]): Dataset[α]

    Executes a query on the default cluster to yield a suitably partitioned Dataset of elements of type α.

    Typically α will be a user defined case class. For example:

    case class Person(name: String, age: Int)

    spark.dataset[Person]("SELECT name, age FROM Person")

    returns a result set of type Dataset[Person] whose fields name and age are matched by name against the metadata of the underlying DataFrame.

    The only real requirement, however, is that the type be a member of the Encoder type class. This includes classes implementing the Product trait, and so, in particular, all small case classes.

    α

    Any type for which an Encoder is available; includes small case classes and extensions of trait Product.

    text

    The text of a query to be executed on the cluster or the name of an existing table in the cluster to load.

    mfpi

    The maximum number of factors per distinct instance to include in the factorization implicitly performed by the server, or 0 if no limit is necessary.

    returns

    The results of the query in the form of a suitably partitioned Dataset[α].

    Exceptions thrown

    SQLException if a database access error occurs.

  10. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  11. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  12. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  13. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  14. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  15. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  16. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  17. final def notify(): Unit

    Definition Classes
    AnyRef
  18. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  19. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  20. def toString(): String

    Definition Classes
    AnyRef → Any
  21. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  22. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  23. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from AnyRef

Inherited from Any

