Executes a query on the default cluster to yield a suitably partitioned DataFrame.
Parameters:
- query: The text of a query to be executed on the cluster, or the name of an existing table in the cluster to load.
- column: The name of the integral-valued column in the result set with which to further partition the query.
- lo: The lower bound of the partitioning column.
- hi: The upper bound of the partitioning column.
- partitions: The number of partitions per instance to create.
Returns: The results of the query in the form of a suitably partitioned DataFrame.
Throws: SQLException if a database access error occurs.
See also: JDBC to Other Databases, for more on the semantics of the column, lo, hi, and partitions parameters.
Executes a query on the default cluster to yield a suitably partitioned DataFrame.
Parameters:
- query: The text of a query to be executed on the cluster, or the name of an existing table in the cluster to load.
- mfpi: The maximum number of factors per distinct instance to include in the factorization implicitly performed by the server, or 0 if no limit is necessary.
Returns: The results of the query in the form of a suitably partitioned DataFrame.
Throws: SQLException if a database access error occurs.
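When no explicit partitioning column is supplied, this overload lets the server factorize the query implicitly. A sketch of such a call (the method name, parameter meaning, and import are assumptions):

```scala
import org.apache.spark.sql.SparkSession
import com.intersystems.spark._  // assumed import for the IRIS session extensions

val spark = SparkSession.builder().appName("iris-example").getOrCreate()

// The second argument caps the factors per instance that the server may
// include in its implicit factorization; 0 places no limit on it.
val df = spark.dataframe("SELECT name FROM Person", 0)
```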
Executes a query on the default cluster to yield a suitably partitioned Dataset of elements of type α.
Typically α will be a user-defined case class. For example:

    case class Person(name: String, age: Int)

    spark.dataset[Person]("SELECT name, age FROM Person", "column", 0, 10000, 2)

returns a result set of type Dataset[Person] whose fields name and age are matched against the metadata of the underlying DataFrame by their respective names.

The only real requirement, however, is that the type be a member of the Encoder type class. This includes classes implementing the Product trait, and so, in particular, all small case classes.
Any type for which an Encoder is available; includes small case classes and extensions of trait Product.
Parameters:
- query: The text of a query to be executed on the cluster, or the name of an existing table in the cluster to load.
- column: The name of the integral-valued column in the result set with which to further partition the query.
- lo: The lower bound of the partitioning column.
- hi: The upper bound of the partitioning column.
- partitions: The number of partitions per instance to create.
Returns: The results of the query in the form of a suitably partitioned Dataset[α].
Throws: SQLException if a database access error occurs.
See also: JDBC to Other Databases, for more on the semantics of the column, lo, hi, and partitions parameters.
Executes a query on the default cluster to yield a suitably partitioned Dataset of elements of type α.
Typically α will be a user-defined case class. For example:

    case class Person(name: String, age: Int)

    spark.dataset[Person]("SELECT name, age FROM Person")

returns a result set of type Dataset[Person] whose fields name and age are matched against the metadata of the underlying DataFrame by their respective names.

The only real requirement, however, is that the type be a member of the Encoder type class. This includes classes implementing the Product trait, and so, in particular, all small case classes.
Any type for which an Encoder is available; includes small case classes and extensions of trait Product.
Parameters:
- query: The text of a query to be executed on the cluster, or the name of an existing table in the cluster to load.
- mfpi: The maximum number of factors per distinct instance to include in the factorization implicitly performed by the server, or 0 if no limit is necessary.
Returns: The results of the query in the form of a suitably partitioned Dataset[α].
Throws: SQLException if a database access error occurs.
© 2024 InterSystems Corporation, Cambridge, MA. All rights reserved.
Extends the given session with IRIS specific methods.
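Enabling the extension typically amounts to a single import that makes the IRIS-specific methods available on an ordinary SparkSession. The package name below follows common connector conventions and is an assumption, not taken from this page:

```scala
import org.apache.spark.sql.SparkSession
import com.intersystems.spark._  // assumed: implicit class extending SparkSession

case class Person(name: String, age: Int)

val spark = SparkSession.builder().appName("iris-example").getOrCreate()

// The extension methods are now available directly on the session,
// here loading an existing table as a Dataset[Person]:
val people = spark.dataset[Person]("Person")
```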