The RDF dataset plugin in Silk currently loads all data into an in-memory Jena Model and is therefore limited by the available memory. I just updated the plugin description to make this clear.
As Jindřich wrote, the preferred way of handling large RDF datasets is to load them into an RDF store, such as Virtuoso.
For large datasets, there is also commercial support for processing them on a Spark cluster. Please contact me if you are interested in that.