Wednesday, February 26, 2020

7 Simple Steps to MongoDB Sharding

Hi there,

Here are simple steps to configure and test MongoDB sharding on a single server. I have configured the config servers, mongos, and the shards on the same server, separating them only by port.
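
For reference, this is the port layout used throughout the steps below (everything runs on localhost):

RS_CONFIGSRV (config servers) : ports 20000, 20001
mongos router                 : default port 27017 (no --port given)
Shard replica set r1          : ports 30001, 30002
Shard replica set r2          : ports 40001, 40002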

1. Configure Config servers

pwd
mkdir configsrv
cd configsrv/
mkdir config1 config2
cd
mongod --configsvr --dbpath /home/mongod/configsrv/config1 --port 20000 --replSet RS_CONFIGSRV --logpath /home/mongod/configsrv/config1/config1.log --fork
mongod --configsvr --dbpath /home/mongod/configsrv/config2 --port 20001 --replSet RS_CONFIGSRV --logpath /home/mongod/configsrv/config2/config2.log --fork
mongo --port 20000
rs.initiate()
rs.add("localhost:20001")


2. Start mongos with config servers

mongos --configdb RS_CONFIGSRV/localhost:20000,localhost:20001 --logpath /home/mongod/mongos.log --fork
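
With no --port specified, mongos listens on the default port 27017. To confirm it is up and talking to the config servers, connect to it and check the sharding status (the shards list will still be empty at this point):

mongo --port 27017
sh.status()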


3. Configure your first shard server

mkdir -p shard1/server1
mkdir -p shard1/server2
mongod --shardsvr --dbpath /home/mongod/shard1/server1 --port 30001 --replSet r1 --logpath /home/mongod/shard1/server1/mongod.log --fork
mongod --shardsvr --dbpath /home/mongod/shard1/server2 --port 30002 --replSet r1 --logpath /home/mongod/shard1/server2/mongod.log --fork
mongo --port 30001
rs.initiate()
rs.add("localhost:30002")


4. Configure your Second shard server

mkdir -p shard2/server1
mkdir -p shard2/server2
mongod --shardsvr --dbpath /home/mongod/shard2/server1 --port 40001 --replSet r2 --logpath /home/mongod/shard2/server1/mongod.log --fork
mongod --shardsvr --dbpath /home/mongod/shard2/server2 --port 40002 --replSet r2 --logpath /home/mongod/shard2/server2/mongod.log --fork
mongo --port 40001
rs.initiate()
rs.add("localhost:40002")

5. Create the database on the shard where you want to keep the collection (its primary shard)

mongo --port 30001
use test_db
db.test_collection.createIndex( { _id : "hashed" } )
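
You can confirm the hashed index exists alongside the default _id index with:

db.test_collection.getIndexes()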

6. Add shards to mongos

mongo
sh.addShard("r1/localhost:30001,localhost:30002")
sh.addShard("r2/localhost:40001,localhost:40002")
sh.status()
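
If you prefer a raw command over the sh helper (handy for scripting), the same shard list is also available via:

db.adminCommand({ listShards: 1 })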

7. Enable sharding, shard the collection, and insert dummy data

sh.enableSharding("test_db")
sh.shardCollection("test_db.test_collection", { "_id": "hashed" } )
use test_db
for (var i = 1; i <= 5000; i++) db.test_collection.insert( { x : i } )
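
To see how the 5000 documents ended up spread across r1 and r2, run the following from the mongos shell:

db.test_collection.getShardDistribution()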


8. Result:

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("5e56411ab72635cd8d291541")
  }
  shards:
        {  "_id" : "r1",  "host" : "r1/localhost:30001,localhost:30002",  "state" : 1 }
        {  "_id" : "r2",  "host" : "r2/localhost:40001,localhost:40002",  "state" : 1 }
  active mongoses:
        "4.0.16" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                r1      1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : r1 Timestamp(1, 0)
        {  "_id" : "test_db",  "primary" : "r1",  "partitioned" : true,  "version" : {  "uuid" : UUID("ab3ca1c4-de9b-4273-b787-f61193b6b614"),  "lastMod" : 1 } }
                test_db.test_collection
                        shard key: { "_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                r1      2
                                r2      2
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-4611686018427387902") } on : r1 Timestamp(1, 0)
                        { "_id" : NumberLong("-4611686018427387902") } -->> { "_id" : NumberLong(0) } on : r1 Timestamp(1, 1)
                        { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("4611686018427387902") } on : r2 Timestamp(1, 2)
                        { "_id" : NumberLong("4611686018427387902") } -->> { "_id" : { "$maxKey" : 1 } } on : r2 Timestamp(1, 3)

mongos>

Tuesday, February 18, 2020

How to see current connections to MongoDB

Use either of the two commands below to get the total count of connections:


testRepl:PRIMARY> var status=db.serverStatus()
testRepl:PRIMARY> status.connections
{ "current" : 9, "available" : 51191, "totalCreated" : 29 }

OR


testRepl:PRIMARY> db.serverStatus().connections
{ "current" : 9, "available" : 51191, "totalCreated" : 29 }
testRepl:PRIMARY> 
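
If you want to watch these totals change over time, a small shell loop works as well (a minimal sketch; adjust the interval and stop it with Ctrl+C):

while (true) { printjson(db.serverStatus().connections); sleep(5000); }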


To get the connection count per client, you can run:


db.currentOp(true).inprog.reduce(
  (accumulator, connection) => {
    ipaddress = connection.client ? connection.client.split(":")[0] : "Internal";
    accumulator[ipaddress] = (accumulator[ipaddress] || 0) + 1;
    accumulator["TOTAL_CONNECTION_COUNT"]++;
    return accumulator;
  },
  { TOTAL_CONNECTION_COUNT: 0 }
)
OUTPUT:
testRepl:PRIMARY> db.currentOp(true).inprog.reduce(
...   (accumulator, connection) => {
...     ipaddress = connection.client ? connection.client.split(":")[0] : "Internal";
...     accumulator[ipaddress] = (accumulator[ipaddress] || 0) + 1;
...     accumulator["TOTAL_CONNECTION_COUNT"]++;
...     return accumulator;
...   },
...   { TOTAL_CONNECTION_COUNT: 0 }
... )
{
        "TOTAL_CONNECTION_COUNT" : 59,
        "172.31.30.18" : 5,
        "172.31.16.25" : 3,
        "172.31.31.40" : 1,
        "Internal" : 50
}
testRepl:PRIMARY> 
You can see that "Internal" is 50. These are all MongoDB internal connections. If you wish to see those, use the command below.
db.currentOp(true).inprog.filter(connection => !connection.client).map(connection => connection.desc);
OUTPUT:

testRepl:PRIMARY> db.currentOp(true).inprog.filter(connection => !connection.client).map(connection => connection.desc);
[ "NoopWriter", "replication-3", "replexec-15", "replexec-14", "repl writer worker 14", "repl writer worker 13", "repl writer worker 10", "repl writer worker 9", "repl writer worker 7", "repl writer worker 6", "repl writer worker 5", "WT RecordStoreThread: local.oplog.rs", "repl writer worker 5", "WTOplogJournalThread", "repl writer worker 12", "SessionKiller", "repl writer worker 8", "ReplBatcher", "repl writer worker 4", "WTCheckpointThread", "monitoring keys for HMAC", "WTIdleSessionSweeper", "TTLMonitor", "repl writer worker 0", "WTJournalFlusher", "clientcursormon", "rsBackgroundSync", "repl writer worker 4", "rsSync", "ftdc", "repl writer worker 2", "repl writer worker 6", "repl writer worker 10", "repl writer worker 11", "repl writer worker 7", "SyncSourceFeedback", "initandlisten", "LogicalSessionCacheRefresh", "repl writer worker 3", "repl writer worker 3", "repl writer worker 11", "repl writer worker 12", "repl writer worker 8", "repl writer worker 1", "LogicalSessionCacheReap", "repl writer worker 15", "repl writer worker 9", "ApplyBatchFinalizerForJournal", "repl writer worker 0", "repl writer worker 2" ]
testRepl:PRIMARY>