I have a service that connects to a Mongo cluster running on MongoDB Atlas. The test suite, for the time being, connects to the cluster, runs a number of tests, destroys the created data, and closes the connection. At least, that's what it is supposed to do.
A chunk of the Test Suite:
class BandFeatureTests extends FeatureTest with BeforeAndAfter {

  override val server = new EmbeddedHttpServer(
    new GorillaServer {
      override val overrideModules: Seq[TwitterModule] = Seq(
        DatabaseProviderTestModule // declared as an object below, so no `new`
      )
      override def warmup(): Unit = { /* no op */ }
    },
    flags = Map(
      "dbscope" -> "testbands"
    )
  )

  val mockDBProvider: MongoTestSpec = server.injector
    .instance[MongoDataBaseConnector]
    .asInstanceOf[MongoTestSpec]

  override def afterAll(): Unit = {
    mockDBProvider.close()
  }

  test("Should Get Bands") {
    server.httpGet(
      path = "/bands",
      andExpect = Status.Ok)
  }
}
The dbscope flag is used to name the database, so that different test suites can boot different databases and run in parallel without deleting each other's data.
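The intended isolation can be sketched in plain Scala (the names here are hypothetical, not from the real module; only the flag values and the "KRONOSPROD" default come from the code below):

```scala
// Hedged sketch of how a per-suite dbscope flag keeps test data isolated:
// each suite boots with its own flag value, so each one talks to its own
// database, and an absent flag falls back to the module's default.
object DbScopeSketch {
  // stand-in for MongoDataBaseConnector picking its database from the flag
  def databaseFor(dbscope: Option[String]): String =
    dbscope.getOrElse("KRONOSPROD") // the module's default scope

  def main(args: Array[String]): Unit = {
    val bandsDb  = databaseFor(Some("testbands"))  // this suite's flag value
    val venuesDb = databaseFor(Some("testvenues")) // a hypothetical second suite
    assert(bandsDb != venuesDb) // parallel suites never share a database
    println(s"$bandsDb $venuesDb")
  }
}
```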
DatabaseProviderTestModule:
object DatabaseProviderTestModule extends TwitterModule {
  flag(name = "dbscope", default = "KRONOSPROD", help = "Set the database scope")

  @Provides
  @Singleton
  def providesDbscope(@Flag("dbscope") dbscope: String): MongoDataBaseConnector = {
    new MongoDataBaseConnector(dbscope)
  }
}
MongoDataBaseConnector:
class MongoDataBaseConnector(dbscope: String) {
  System.setProperty("org.mongodb.async.type", "netty")

  val conf: Config = ConfigFactory.load()
  val dbConfig: Config = conf.getConfig("db")
  val user: String = dbConfig.getString("properties.username")
  val pwd: String = dbConfig.getString("properties.pwd")

  val url: String = s"mongodb+srv://$user:$pwd@cluster0-vrzo2.mongodb.net/test?retryWrites=true&w=majority"

  val mongoClient: MongoClient = MongoClient(url)
  // the dbscope flag supplies the database name, one per test suite
  val database: MongoDatabase = mongoClient.getDatabase(dbscope)

  def close(): Unit = {
    mongoClient.close()
  }

  // FOLLOWED BY ALL THE DOC DEFINITIONS AND CODECS
}
I have noticed that after the suite is done and all tests pass, the Mongo driver keeps printing to the sbt console:
23:41:43.182 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-01-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.protocol.command - Sending command '{ "ismaster" : 1, "$db" : "admin", "$clusterTime" : { "clusterTime" : { "$timestamp" : { "t" : 1562715686, "i" : 2 } }, "signature" : { "hash" : { "$binary" : { "base64" : "fSDq9nBlwYNI+v2jd9bMV4HJzQI=", "subType" : "00" } }, "keyId" : 6709470559735906305 } } }' with request id 33 to database admin on connection [connectionId{localValue:1, serverValue:62629}] to server cluster0-shard-00-01-vrzo2.mongodb.net:27017
23:41:43.185 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-00-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Checking status of cluster0-shard-00-00-vrzo2.mongodb.net:27017
23:41:43.187 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-00-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.protocol.command - Sending command '{ "ismaster" : 1, "$db" : "admin", "$clusterTime" : { "clusterTime" : { "$timestamp" : { "t" : 1562715686, "i" : 2 } }, "signature" : { "hash" : { "$binary" : { "base64" : "fSDq9nBlwYNI+v2jd9bMV4HJzQI=", "subType" : "00" } }, "keyId" : 6709470559735906305 } } }' with request id 34 to database admin on connection [connectionId{localValue:3, serverValue:83104}] to server cluster0-shard-00-00-vrzo2.mongodb.net:27017
23:41:43.187 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-02-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Checking status of cluster0-shard-00-02-vrzo2.mongodb.net:27017
23:41:43.189 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-02-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.protocol.command - Sending command '{ "ismaster" : 1, "$db" : "admin", "$clusterTime" : { "clusterTime" : { "$timestamp" : { "t" : 1562715686, "i" : 2 } }, "signature" : { "hash" : { "$binary" : { "base64" : "fSDq9nBlwYNI+v2jd9bMV4HJzQI=", "subType" : "00" } }, "keyId" : 6709470559735906305 } } }' with request id 35 to database admin on connection [connectionId{localValue:4, serverValue:84492}] to server cluster0-shard-00-02-vrzo2.mongodb.net:27017
23:41:43.200 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-01-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.protocol.command - Execution of command with request id 33 completed successfully in 18.31 ms on connection [connectionId{localValue:1, serverValue:62629}] to server cluster0-shard-00-01-vrzo2.mongodb.net:27017
23:41:43.201 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-01-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=cluster0-shard-00-01-vrzo2.mongodb.net:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='region', value='US_EAST_1'}, Tag{name='provider', value='AWS'}, Tag{name='nodeType', value='ELECTABLE'}]}, roundTripTime=58.6 ms, state=CONNECTED}, {address=cluster0-shard-00-02-vrzo2.mongodb.net:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='region', value='US_EAST_1'}, Tag{name='provider', value='AWS'}, Tag{name='nodeType', value='ELECTABLE'}]}, roundTripTime=55.3 ms, state=CONNECTED}, {address=cluster0-shard-00-00-vrzo2.mongodb.net:27017, type=REPLICA_SET_PRIMARY, TagSet{[Tag{name='region', value='US_EAST_1'}, Tag{name='provider', value='AWS'}, Tag{name='nodeType', value='ELECTABLE'}]}, roundTripTime=69.8 ms, state=CONNECTED}]
23:41:43.208 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-00-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.protocol.command - Execution of command with request id 34 completed successfully in 21.74 ms on connection [connectionId{localValue:3, serverValue:83104}] to server cluster0-shard-00-00-vrzo2.mongodb.net:27017
23:41:43.209 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-00-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=cluster0-shard-00-01-vrzo2.mongodb.net:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='region', value='US_EAST_1'}, Tag{name='provider', value='AWS'}, Tag{name='nodeType', value='ELECTABLE'}]}, roundTripTime=58.6 ms, state=CONNECTED}, {address=cluster0-shard-00-02-vrzo2.mongodb.net:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='region', value='US_EAST_1'}, Tag{name='provider', value='AWS'}, Tag{name='nodeType', value='ELECTABLE'}]}, roundTripTime=55.3 ms, state=CONNECTED}, {address=cluster0-shard-00-00-vrzo2.mongodb.net:27017, type=REPLICA_SET_PRIMARY, TagSet{[Tag{name='region', value='US_EAST_1'}, Tag{name='provider', value='AWS'}, Tag{name='nodeType', value='ELECTABLE'}]}, roundTripTime=60.4 ms, state=CONNECTED}]
23:41:43.219 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-02-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.protocol.command - Execution of command with request id 35 completed successfully in 30.67 ms on connection [connectionId{localValue:4, serverValue:84492}] to server cluster0-shard-00-02-vrzo2.mongodb.net:27017
23:41:43.219 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-02-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=cluster0-shard-00-01-vrzo2.mongodb.net:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='region', value='US_EAST_1'}, Tag{name='provider', value='AWS'}, Tag{name='nodeType', value='ELECTABLE'}]}, roundTripTime=58.6 ms, state=CONNECTED}, {address=cluster0-shard-00-02-vrzo2.mongodb.net:27017, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='region', value='US_EAST_1'}, Tag{name='provider', value='AWS'}, Tag{name='nodeType', value='ELECTABLE'}]}, roundTripTime=50.6 ms, state=CONNECTED}, {address=cluster0-shard-00-00-vrzo2.mongodb.net:27017, type=REPLICA_SET_PRIMARY, TagSet{[Tag{name='region', value='US_EAST_1'}, Tag{name='provider', value='AWS'}, Tag{name='nodeType', value='ELECTABLE'}]}, roundTripTime=60.4 ms, state=CONNECTED}]
23:41:53.203 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-01-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Checking status of cluster0-shard-00-01-vrzo2.mongodb.net:27017
23:41:53.204 [cluster-ClusterId{value='5d25262bba2897b3082238a4', description='null'}-cluster0-shard-00-01-vrzo2.mongodb.net:27017] DEBUG org.mongodb.driver.protocol.command - Sending command '{ "ismaster" : 1, "$db" : "admin", "
Now, frankly, I am not sure where the problem is coming from, but I did a few things, with my guess leaning towards Mongo as the source.
I have confirmed in debug mode that the object id of the MongoDataBaseConnector instance being hit in the src code is the same as the one being hit by mockDBProvider.close(). I was wondering if, for some reason, the Finatra injector created a separate instance. Turns out that's not the case.
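The identity check amounts to the following sketch. Finatra binds the provider @Singleton, so every injector lookup should return the same instance; a lazy val stands in for that singleton scope here, since Finatra itself isn't on the classpath:

```scala
// Pure-Scala stand-in for the @Singleton check: both lookups must be the
// same object (reference equality), meaning close() in the test really does
// hit the connection the production code opened.
object SingletonIdentityCheck {
  final class Connector // stand-in for MongoDataBaseConnector

  lazy val injected: Connector = new Connector // hypothetical singleton binding

  def main(args: Array[String]): Unit = {
    val inSrcCode = injected // instance hit by the production code path
    val inTest    = injected // instance hit by mockDBProvider.close()
    assert(inSrcCode eq inTest) // reference equality: same object id
    println(inSrcCode eq inTest) // prints true
  }
}
```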
I have added an extra .close() call in an afterEach function:
override def afterEach(): Unit = { mockDBProvider.close() }
Normally, adding this should cause the afterAll() call to error out, because there would no longer be an open connection to close. To my surprise, it didn't. So I logged into MongoDB Atlas to check the metrics, and realized that the number of connections is negative, pictured below. The reasonable explanation is that the second .close() call got executed, driving the metric below zero on every test run.
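One possible guard, sketched here rather than taken from the real code: make close() idempotent with an AtomicBoolean, so the second call (afterEach plus afterAll) becomes a no-op instead of a second mongoClient.close():

```scala
import java.util.concurrent.atomic.AtomicBoolean

// Wrap the real close action so only the first invocation runs it; any later
// call sees the flag already flipped and does nothing.
class IdempotentCloser(doClose: () => Unit) {
  private val closed = new AtomicBoolean(false)

  def close(): Unit =
    if (closed.compareAndSet(false, true)) doClose() // only the first call wins
}

object IdempotentCloserDemo {
  def main(args: Array[String]): Unit = {
    var closeCalls = 0
    val closer = new IdempotentCloser(() => closeCalls += 1)
    closer.close() // e.g. from afterEach
    closer.close() // e.g. from afterAll: ignored
    assert(closeCalls == 1)
    println(closeCalls) // prints 1
  }
}
```

This wouldn't explain why the driver kept heartbeating after close(), but it would at least stop a double-close from skewing the Atlas connection metric.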
I am curious whether I am terminating the connection incorrectly, or whether anyone else has run into this problem.
P.S. Contacted Mongo Support about this, awaiting response