Writing to Solace with Spark

hey @Aaron, yeah, the connection and producer are created and closed per partition, so the upper limit is the number of Spark partitions, which we can control and is generally not that many, and I can introduce some throttling here. And to answer your question, yes! I am iterating over the rows per partition and sending each row as a message as I iterate, and no, the topic is not dynamic in my case, so I think we are good. Couple of questions:
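For context, the per-partition pattern described above can be sketched roughly like this. This is only an illustration: `StubSolaceProducer` is a hypothetical stand-in for the real Solace JMS connection/session/producer, and the function name, topic, and throttle parameter are made up for the sketch; in the actual job the function would be passed to something like `df.rdd.foreachPartition(...)`.

```python
import time

class StubSolaceProducer:
    """Hypothetical stand-in for a Solace JMS Connection/Session/MessageProducer.
    The real code would create these via the Solace JMS API instead."""
    def __init__(self):
        self.sent = []       # collected here only so the sketch is inspectable
        self.closed = False

    def send(self, topic, payload):
        assert not self.closed, "send after close"
        self.sent.append((topic, payload))

    def close(self):
        self.closed = True

def send_partition(rows, topic="example/topic", max_msgs_per_sec=1000):
    """One connection/producer per Spark partition: open, iterate the
    partition's rows sending each as a message, then close. A crude
    client-side throttle caps the send rate (the "throttling" mentioned
    above)."""
    producer = StubSolaceProducer()          # created once per partition
    try:
        interval = 1.0 / max_msgs_per_sec    # simple fixed-rate throttle
        for row in rows:
            producer.send(topic, row)        # one message per row
            time.sleep(interval)
    finally:
        producer.close()                     # always released per partition
    return producer                          # returned only for inspection

# Simulate one partition containing three rows
p = send_partition(iter(["r1", "r2", "r3"]), max_msgs_per_sec=100000)
print(len(p.sent), p.closed)   # 3 True
```

Since the connection count is bounded by the partition count, repartitioning or coalescing the DataFrame before the write is the usual way to cap concurrent connections.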
Q1) Does Solace (and I am assuming it does) have a connection pool and a limit on how many connections any client can make to Solace using the JMS APIs?
Q2) Since Spark can process large amounts of data distributed over several nodes, let's say I read a huge dataset and try to send it over Solace; I don't want to bombard Solace's queues with my messages/events. How does Solace control this? Does it store the incoming events in a buffer and then pass them on to the broker? How does it handle a very large number of events arriving in a very short span, and what can a client do/configure to avoid this?
thanks.