Solace com.solacecoe.connectors:pubsubplus-connector-spark:3.1.2 with AWS Databricks 16.4 LTS (includes Apache Spark 3.5.2, Scala 2.12) and 15.4 LTS (includes Apache Spark 3.5.0, Scala 2.12)

I tried using the Solace connector com.solacecoe.connectors:pubsubplus-connector-spark:3.1.2 with AWS Databricks on both 16.4 LTS (includes Apache Spark 3.5.2, Scala 2.12) and 15.4 LTS (includes Apache Spark 3.5.0, Scala 2.12), but in both cases I get the error shown below.

I understand this is likely a version mismatch issue. Could you please advise which Databricks LTS version is compatible with this Solace connector?
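
Roughly, the stream is created the standard Structured Streaming way; the sketch below is only meant to show the shape of the query. The broker host, VPN, credentials, and queue name are placeholders, and the format name and option keys follow my reading of the connector documentation, so treat them as assumptions rather than exact settings.

// Illustrative sketch only (Scala notebook cell); option names and values are placeholders.
val events = spark.readStream
  .format("solace")                                      // connector short name, per its docs
  .option("host", "tcps://broker.example.com:55443")     // placeholder broker URL
  .option("vpn", "default")                              // placeholder message VPN
  .option("username", "solace-user")                     // placeholder credentials
  .option("password", "********")
  .option("queue", "example/queue")                      // placeholder queue name
  .load()

val query = events.writeStream
  .format("console")                                     // sink choice is illustrative only
  .option("checkpointLocation", "/tmp/solace-checkpoint") // placeholder checkpoint path
  .start()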

com.solacecoe.connectors.spark.streaming.solace.exceptions.SolaceSessionException: com.solacesystems.jcsmp.ClosedFacilityException: Session is closed.

    at com.solacecoe.connectors.spark.streaming.solace.SolaceBroker.handleException(SolaceBroker.java:447)
    at com.solacecoe.connectors.spark.streaming.solace.SolaceBroker.initProducer(SolaceBroker.java:261)
    at com.solacecoe.connectors.spark.streaming.SolaceMicroBatch.<init>(SolaceMicroBatch.java:89)
    at com.solacecoe.connectors.spark.SolaceScan.toMicroBatchStream(SolaceScan.java:44)
    at org.apache.spark.sql.execution.streaming.MicrobatchStreamingQueryOptimizer$$anonfun$translateV2RelationsToScanRelation$1.$anonfun$applyOrElse$2(MicrobatchStreamingQueryOptimizer.scala:263)
    at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
    at org.apache.spark.sql.execution.streaming.MicrobatchStreamingQueryOptimizer$$anonfun$translateV2RelationsToScanRelation$1.applyOrElse(MicrobatchStreamingQueryOptimizer.scala:261)
    at org.apache.spark.sql.execution.streaming.MicrobatchStreamingQueryOptimizer$$anonfun$translateV2RelationsToScanRelation$1.applyOrElse(MicrobatchStreamingQueryOptimizer.scala:258)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:521)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:85)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:521)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:42)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:361)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:357)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:42)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:42)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:497)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:465)
    at org.apache.spark.sql.execution.streaming.MicrobatchStreamingQueryOptimizer.translateV2RelationsToScanRelation(MicrobatchStreamingQueryOptimizer.scala:258)
    at org.apache.spark.sql.execution.streaming.MicrobatchStreamingQueryOptimizer.injectOperatorPushDown(MicrobatchStreamingQueryOptimizer.scala:210)
    at org.apache.spark.sql.execution.streaming.MicrobatchStreamingQueryOptimizer.apply(MicrobatchStreamingQueryOptimizer.scala:280)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.planQuery(MicroBatchExecution.scala:282)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:375)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:375)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.initSources(MicroBatchExecution.scala:392)
    at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$2(StreamExecution.scala:428)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1463)
    at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:389)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:49)
    at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:293)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
    at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:289)
    at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:47)
    at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:44)
    at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:29)
    at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:96)
    at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:77)
    at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:29)
    at com.databricks.spark.util.PublicDBLogging.withAttributionTags0(DatabricksSparkUsageLogger.scala:108)
    at com.databricks.spark.util.DatabricksSparkUsageLogger.withAttributionTags(DatabricksSparkUsageLogger.scala:216)
    at com.databricks.spark.util.UsageLogging.$anonfun$withAttributionTags$1(UsageLogger.scala:668)
    at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:780)
    at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:789)
    at com.databricks.spark.util.UsageLogging.withAttributionTags(UsageLogger.scala:668)
    at com.databricks.spark.util.UsageLogging.withAttributionTags$(UsageLogger.scala:666)
    at org.apache.spark.sql.execution.streaming.StreamExecution.withAttributionTags(StreamExecution.scala:87)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:369)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$3(StreamExecution.scala:287)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$2(StreamExecution.scala:287)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51)
    at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:104)
    at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:109)
    at scala.util.Using$.resource(Using.scala:269)
    at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:108)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:286)
Caused by: com.solacesystems.jcsmp.ClosedFacilityException: Session is closed.
    at com.solacesystems.jcsmp.impl.JCSMPBasicSession.checkClosed(JCSMPBasicSession.java:1566)
    at com.solacesystems.jcsmp.impl.JCSMPBasicSession.sniffRouter(JCSMPBasicSession.java:437)
    at com.solacesystems.jcsmp.impl.JCSMPBasicSession.getMessageProducer(JCSMPBasicSession.java:771)
    at com.solacecoe.connectors.spark.streaming.solace.SolaceBroker.initProducer(SolaceBroker.java:247)
    ... 58 more

Hi there, I’m Solly AI. I generate responses to your questions using the power of the Solace Agent Mesh.

Please remember that AI technology can make mistakes, so please read our Disclaimer.

AI-generated responses are a work in progress, and your help in evaluating my response would be awesome! If my response makes sense, please mark it as a solution or give it a :+1:. If my response doesn’t make sense for your topic, please react to it with a :shaking_face:.

Looking at the documentation and community discussions, there appears to be a compatibility issue with the Solace Spark connector (com.solacecoe.connectors:pubsubplus-connector-spark:3.1.2) on AWS Databricks. The ClosedFacilityException in your stack trace is thrown by the Solace JCSMP API when the session has already been closed by the time the connector tries to create its message producer (SolaceBroker.initProducer), so it points to an initialization problem rather than a live connection being dropped.
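
To make that concrete, here is a minimal JCSMP sketch (not the connector’s own code; host, VPN, and username are placeholders) showing how the exception arises: requesting a message producer from a session that has already been closed trips the session’s closed-state check, which matches the getMessageProducer → checkClosed frames in the "Caused by" section of your stack trace.

// Minimal JCSMP sketch (placeholders throughout); illustrates only the ClosedFacilityException path.
import com.solacesystems.jcsmp.{JCSMPException, JCSMPFactory, JCSMPProperties, JCSMPStreamingPublishEventHandler}

val props = new JCSMPProperties()
props.setProperty(JCSMPProperties.HOST, "tcps://broker.example.com:55443") // placeholder
props.setProperty(JCSMPProperties.VPN_NAME, "default")                     // placeholder
props.setProperty(JCSMPProperties.USERNAME, "solace-user")                 // placeholder

val session = JCSMPFactory.onlyInstance().createSession(props)
session.connect()
session.closeSession() // once the session is closed, any facility request fails its closed-state check

// Throws com.solacesystems.jcsmp.ClosedFacilityException: Session is closed.
val producer = session.getMessageProducer(new JCSMPStreamingPublishEventHandler {
  override def responseReceived(messageId: String): Unit = ()
  override def handleError(messageId: String, cause: JCSMPException, timestamp: Long): Unit = ()
})

In the connector itself the session lifecycle is managed internally, so the real question is why the session is already closed by the time SolaceBroker.initProducer runs on these Databricks runtimes.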

Multiple users have also reported a related initialization error when starting the connector:

java.lang.IllegalArgumentException: Unable to instantiate factory class [org.springframework.cloud.config.client.ConfigServerConfigDataLocationResolver] for factory type [org.springframework.boot.context.config.ConfigDataLocationResolver]

This error occurs across different environments:

  • Local Mac workstations
  • Windows 11
  • WSL2
  • Different Java versions (17, 18, 21)

That error appears to be a Spring framework initialization issue rather than a connection problem with either the software broker or AWS Databricks. Since it happens during startup, before any connection attempt is made, actual broker connectivity cannot even be verified.
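
Since both failure modes point toward a classpath or version conflict rather than broker connectivity, one practical step is to confirm which jar actually provides the classes named in the stack traces on your cluster. The snippet below is a generic diagnostic sketch for a Scala notebook cell; it uses only standard JVM reflection, and the class names are taken from the errors above.

// Generic classpath diagnostic: prints which jar (if any) supplies each class on the driver.
val classNames = Seq(
  "com.solacesystems.jcsmp.JCSMPSession",
  "com.solacecoe.connectors.spark.streaming.solace.SolaceBroker",
  "org.springframework.boot.context.config.ConfigDataLocationResolver",
  "org.springframework.cloud.config.client.ConfigServerConfigDataLocationResolver"
)
classNames.foreach { name =>
  val location =
    try {
      // initialize = false avoids running static initializers during the check
      Option(Class.forName(name, false, getClass.getClassLoader).getProtectionDomain.getCodeSource)
        .flatMap(cs => Option(cs.getLocation))
        .map(_.toString)
        .getOrElse("(no code source reported)")
    } catch {
      case _: ClassNotFoundException => "(not found on the driver classpath)"
    }
  println(s"$name -> $location")
}

If the Spring classes resolve to different jars than the connector expects, or the Solace classes are missing entirely, that would support the version-mismatch suspicion above.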

For the most up-to-date information on compatibility and configuration, please refer to: