Hands on MapR -- Viadea
Presentation Transcript

  • 1. Hands On MapR (CLI only, no GUI ☺)
    Viadea Zhu
    http://weibo.com/viadea
    March 2012
  • 2. Agenda
    • MapR Architecture
    • Cluster Management
    • Volume
    • Mirror
    • Schedule
    • Snapshot
    • NFS
    • Managing Data
    • Users and Groups
    • Troubleshooting and Performance Tuning
  • 3. MapR Architecture
    • Basic services:
      - CLDB
      - FileServer
      - Jobtracker
      - Tasktracker
      - Zookeeper
      - NFS
      - WebServer
    • Warden
      A process called the warden runs on all nodes to manage, monitor, and report on the other services on each node.
      The warden will not start any services unless ZooKeeper is reachable and more than half of the configured ZooKeeper nodes are live.
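    A quick way to sanity-check those preconditions from the shell is sketched below. This is not from the slides: it assumes the ZooKeeper init script in this MapR release accepts the qstatus argument, and it reuses the maprcli node list form shown later in this deck.

        # Ask the local ZooKeeper for its quorum role (leader/follower/standalone).
        /etc/init.d/mapr-zookeeper qstatus

        # Confirm which services the warden has brought up on every node.
        /opt/mapr/bin/maprcli node list -columns hostname,svc,health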
  • 4. Cluster Management
    • Bring up the cluster:
      1. Start ZooKeeper on all nodes where it is installed, by issuing the following command:
         /etc/init.d/mapr-zookeeper start
      2. On one of the CLDB nodes and the node running the mapr-webserver service, start the warden:
         /etc/init.d/mapr-warden start
  • 5. Cluster Management
    • Stop the cluster (1):
      1. Determine which nodes are running the NFS gateway.
         [root@mdw]# /opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc==nfs]" -columns id,h,hn,svc,rp
         id                   service                                                         hostname  health  ip
         4277269757083023248  tasktracker,webserver,cldb,fileserver,nfs,hoststats,jobtracker  mdw       2       172.28.4.250,10.32.190.66,172.28.8.250,172.28.12.250
         3528082726925061986  tasktracker,fileserver,nfs,hoststats                            sdw1      2       172.28.4.1,172.28.8.1,172.28.12.1
         5521777324064226112  fileserver,tasktracker,nfs,hoststats                            sdw3      0       172.28.8.3,172.28.12.3,172.28.4.3
         3482126520576246764  fileserver,tasktracker,nfs,hoststats                            sdw5      0       172.28.4.5,172.28.8.5,172.28.12.5
         4667932985226440135  fileserver,tasktracker,nfs,hoststats                            sdw7      0       172.28.8.7,172.28.12.7,172.28.4.7
  • 6. Cluster Management
    • Stop the cluster (2):
      2. Determine which nodes are running the CLDB.
         [root@mdw]# /opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc==cldb]" -columns id,h,hn,svc,rp
         id                   service                                                         hostname  health  ip
         4277269757083023248  tasktracker,webserver,cldb,fileserver,nfs,hoststats,jobtracker  mdw       2       172.28.4.250,10.32.190.66,172.28.8.250,172.28.12.250
  • 7. Cluster Management
    • Stop the cluster (3):
      3. List all non-CLDB nodes.
         [root@mdw]# /opt/mapr/bin/maprcli node list -filter "[rp==/*]and[svc!=cldb]" -columns id,h,hn,svc,rp
         id                   service                               hostname  health  ip
         3528082726925061986  tasktracker,fileserver,nfs,hoststats  sdw1      2       172.28.4.1,172.28.8.1,172.28.12.1
         5521777324064226112  fileserver,tasktracker,nfs,hoststats  sdw3      0       172.28.8.3,172.28.12.3,172.28.4.3
         3482126520576246764  fileserver,tasktracker,nfs,hoststats  sdw5      0       172.28.4.5,172.28.8.5,172.28.12.5
         4667932985226440135  fileserver,tasktracker,nfs,hoststats  sdw7      0       172.28.8.7,172.28.12.7,172.28.4.7
  • 8. Cluster Management
    • Stop the cluster (4):
      4. Shut down all NFS instances.
         /opt/mapr/bin/maprcli node services -nfs stop -nodes mdw sdw1 sdw3 sdw5 sdw7
      5. SSH into each CLDB node and stop the warden.
         /etc/init.d/mapr-warden stop
      6. SSH into each of the remaining nodes and stop the warden.
         /etc/init.d/mapr-warden stop
      7. Stop ZooKeeper on the ZooKeeper node(s).
         /etc/init.d/mapr-zookeeper stop
  • 9. Cluster Management
    • Restart the WebServer:
      /opt/mapr/adminuiapp/webserver stop
      /opt/mapr/adminuiapp/webserver start
    • Restart services (e.g. the tasktracker):
      maprcli node services -nodes mdw -tasktracker stop
      maprcli node services -nodes mdw -tasktracker start
    • Grant full permission to the chosen administrator OS user:
      /opt/mapr/bin/maprcli acl edit -type cluster -user <user>:fc
  • 10. Cluster Management
    • Alarm email:
      maprcli alarm config save -values "AE_ALARM_AEQUOTA_EXCEEDED,1,test@example.com"
      maprcli alarm config save -values "NODE_ALARM_CORE_PRESENT,1,viadea.zhu@emc.com"
    • List alarms:
      [gpadmin@mdw]$ maprcli alarm list -type cluster
      alarm state  description                                              entity   alarm name                             alarm state change time
      1            One or more licenses is about to expire within 28 days   CLUSTER  CLUSTER_ALARM_LICENSE_NEAR_EXPIRATION  1330171978541
      [gpadmin@mdw]$ maprcli alarm list -type node
      alarm state  description                                                                            entity  alarm name                    alarm state change time
      1            Can not determine if service: cldb is running. Check logs at: /opt/mapr/logs/cldb.log  sdw1    NODE_ALARM_SERVICE_CLDB_DOWN  1324274386763
      1            Node has core file(s)                                                                  mdw     NODE_ALARM_CORE_PRESENT       1330145172579
  • 11. Cluster Management
    • List nodes:
      maprcli node list -columns id,h,hn,br,da,dtotal,dused,davail,fs-heartbeat
      maprcli node list -columns id,br,fs-heartbeat,jt-heartbeat
    • Remove a node (take sdw5 for example):
      1. Stop the warden on sdw5:
         /etc/init.d/mapr-warden stop
      2. Remove it on the CLDB node:
         maprcli node remove -nodes sdw5 -zkconnect sdw1:5181
  • 12. Cluster Management
    • Reformat a node (take sdw5 for example):
      1. Stop the warden:
         /etc/init.d/mapr-warden stop
      2. Remove the disktab file:
         rm /opt/mapr/conf/disktab
      3. Create a text file /tmp/disks.txt that lists all the disks and partitions to format for use by Greenplum HD EE.
         [root@sdw5 ~]# cat /tmp/disks.txt
         /data2/hdpee/storagefile
      4. Use disksetup to re-format the disks:
         /opt/mapr/server/disksetup -F /tmp/disks.txt
      5. Start the warden:
         /etc/init.d/mapr-warden start
  • 13. Cluster Management
    • Add a new node:
      /opt/mapr/server/configure.sh -C mdw -Z sdw1 -N ViadeaCluster
      /opt/mapr/server/disksetup -F /tmp/disks.txt
      /etc/init.d/mapr-warden start
  • 14. Volume
    • Turn off compression:
      [root@mdw ~]# hadoop mfs -ls | grep var
      drwxrwxrwx Z - root root 1 2011-12-19 13:52 268435456 /var
      [root@mdw ~]# hadoop mfs -setcompression off /var
      [root@mdw ~]# hadoop mfs -ls | grep var
      drwxrwxrwx U - root root 1 2011-12-19 13:52 268435456 /var
    • Create a volume:
      maprcli volume create -name viadeavol -path /viadeavol -quota 1G -advisoryquota 200M
      maprcli volume create -name viadeavol.mirror -source viadeavol@viadeacluster -path /viadeavol_mirror -type 1
  • 15. Volume
    • List volumes:
      maprcli volume list -columns volumeid,volumetype,volumename,mountdir,mounted,aename,quota,used,totalused,actualreplication,rackpath
    • View volume properties:
      maprcli volume info -name viadeavol
      maprcli volume info -output terse -name viadeavol
    • Modify a volume:
      maprcli volume modify -name viadeavol.mirror -source viadeavol
  • 16. Volume
    • Mount/unmount a volume:
      maprcli volume unmount -name viadeavol
      maprcli volume mount -name viadeavol
    • Remove a volume:
      maprcli volume remove -name testvol
    • Set the default volume topology:
      maprcli config save -values "{"cldb.default.volume.topology":"/default-rack"}"
      maprcli config save -values "{"cldb.default.volume.topology":"/"}"
  • 17. Volume
    • CLDB-only topology (1)
      1. Planning:
         CLDB-only nodes: mdw, sdw1
         Other nodes: sdw3, sdw5, sdw7
      2. Check the node IDs:
         maprcli node list -columns id,hostname,"topo(rack)"
      3. Move the CLDB nodes to the topology "cldbonly":
         maprcli node move -serverids 4277269757083023248,3528082726925061986 -topology /cldbonly
      4. Move the CLDB volume to the topology "cldbonly":
         maprcli volume move -name mapr.cldb.internal -topology /cldbonly
  • 18. Volume
    • CLDB-only topology (2)
      5. Move the non-CLDB nodes to the topology "noncldb":
         maprcli node move -serverids 5521777324064226112,3482126520576246764,4667932985226440135 -topology /noncldb
      6. Move the non-CLDB volumes to the topology "noncldb":
         maprcli volume move -name mapr.var -topology /noncldb
         maprcli volume move -name viadeavol -topology /noncldb
         maprcli volume move -name mapr.hbase -topology /noncldb
         maprcli volume move -name mapr.jobtracker.volume -topology /noncldb
         maprcli volume move -name mapr.cluster.root -topology /noncldb
  • 19. Mirror
    • Local/remote mirrors:
      maprcli volume create -name viadeavol_mirror1 -source viadeavol@viadeacluster -path /viadeavol_mirror1 -type 1
      maprcli volume create -name viadeavol_mirror2 -source viadeavol@viadeacluster -path /viadeavol_mirror2 -type 1
    • Mirror link:
      maprcli volume link create -volume viadeavol -type mirror -path /maprfs::mirror::viadeavol
  • 20. Mirror
    • Sync mirrors using "push":
      [root@mdw ~]# maprcli volume mirror push -name viadeavol
      Starting mirroring of volume viadeavol_mirror2
      Starting mirroring of volume viadeavol_mirror1
      Mirroring complete for volume viadeavol_mirror1
      Mirroring complete for volume viadeavol_mirror2
      Successfully completed mirror push to all local mirrors of volume viadeavol
    • Sync a mirror using "start":
      [root@mdw ~]# maprcli volume mirror start -full false -name viadeavol_mirror1
      messages
      Started mirror operation for volume(s) viadeavol_mirror1
  • 21. Mirror
    • Stop a mirror sync:
      [gpadmin@mdw viadea]$ maprcli volume mirror stop -name viadeavol_mirror1
      messages
      Stopped mirror operation for viadeavol_mirror1
    • http://answers.mapr.com/questions/1773/about-stopping-mirror
      Answer:
      • Both mirror push and mirror start work the same way: the destination of the mirror pulls the data. The difference is that mirror push is synchronous and the command will wait until the mirroring is complete, while mirror start is asynchronous and only kicks off the mirroring and returns immediately without waiting.
      • mirror stop works in both situations.
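    Because mirror start returns before the data transfer finishes, a small follow-up check can be useful. The sketch below is not from the slides; it simply reuses the volume info command shown earlier in this deck, and the exact mirror progress/status field names in its output are assumed to vary by release.

        # Kick off an asynchronous (incremental) mirror sync.
        maprcli volume mirror start -full false -name viadeavol_mirror1

        # Re-run this and inspect the mirror-related status fields
        # until they report that the operation has finished.
        maprcli volume info -name viadeavol_mirror1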
  • 22. Schedule
    • Create a schedule:
      maprcli schedule create -schedule {"name":"Schedule-1","rules":[{"frequency":"once","retain":"1w","time":13,"date":"12/5/2010"}]}
    • List schedules:
      [root@mdw binary]# maprcli schedule list -output verbose
      id  name            inuse  rules
      1   Critical data   0      ...
      2   Important data  0      ...
      3   Normal data     1      ...
      4   mirror_sync     1      ...
      5   Schedule-1      0      ...
  • 23. Schedule
    • Remove a schedule (see the note below)
    • Modify a schedule:
      maprcli schedule modify -id 0 -name Newname -rules [{"frequency":"weekly","date":"sun","time":7,"retain":"2w"},{"frequency":"daily","time":14,"retain":"1w"}]
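    The slide does not show a removal command. As an assumption about this maprcli version, schedule removal would typically look like the sketch below, where the id comes from maprcli schedule list on the previous slide.

        # Look up the schedule id first (e.g. 5 for Schedule-1 above), then remove it.
        # A schedule that is in use may need to be detached from its volumes first.
        maprcli schedule remove -id 5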
  • 24. Snapshot
    • View the snapshots of one volume:
      [gpadmin@mdw viadea]$ hadoop fs -ls /viadeavol_mirror2/.snapshot
      Found 5 items
      drwxrwxrwx - root root  7 2012-02-24 18:58 /viadeavol_mirror2/.snapshot/viadeavol_mirror2.mirrorsnap.24-Feb-2012-22-35-51
      drwxrwxrwx - root root  8 2012-02-24 22:32 /viadeavol_mirror2/.snapshot/viadeavol_mirror2.mirrorsnap.25-Feb-2012-01-48-25
      drwxrwxrwx - root root 10 2012-02-25 10:44 /viadeavol_mirror2/.snapshot/viadeavol_mirror2.mirrorsnap.25-Feb-2012-12-05-43
      drwxrwxrwx - root root  9 2012-02-24 23:00 /viadeavol_mirror2/.snapshot/viadeavol_mirror2.mirrorsnap.25-Feb-2012-11-09-49
      drwxrwxrwx - root root  0 1970-01-01 08:00 /viadeavol_mirror2/.snapshot/viadeavol_mirror2.mirrorsnap.24-Feb-2012-22-26-18
  • 25. Snapshot
    • Create a snapshot:
      maprcli volume snapshot create -snapshotname test-snapshot -volume viadeavol
    • List snapshots:
      maprcli volume snapshot list -volume viadeavol
    • Remove a snapshot:
      maprcli volume snapshot remove -snapshotname test-snapshotc3 -volume viadeavol
    • Preserve a snapshot:
      maprcli volume snapshot preserve -snapshots 256000083
  • 26. NFS
    • Mount:
      1. List the NFS shares exported on the server:
         [gpadmin@smdw ~]$ /usr/sbin/showmount -e mdw
         Export list for mdw:
         /mapr *
         /mapr/ViadeaCluster *
      2. As root, create the mount directory on smdw:
         mkdir /mapr
      3. Mount on smdw:
         mount mdw:/mapr /mapr
      4. Add the mount to /etc/fstab on smdw:
         mdw:/mapr /mapr nfs rw 0 0
  • 27. NFS
    • Set the chunk size and compression for a volume:
      [root@smdw viadeavol]# more .dfs_attributes
      # lines beginning with # are treated as comments
      Compression=true
      ChunkSize=268435456
      [root@smdw viadeavol]# hadoop mfs -setchunksize 13107000 /viadeavol
      setchunksize: chunksize should be a multiple of 64K
      [root@smdw viadeavol]# hadoop mfs -setchunksize 13107200 /viadeavol
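    The failed call above is rejected because 13107000 is not a multiple of 64K (65536 bytes), while 13107200 = 200 x 65536 is. A quick shell check (not from the slides) makes the constraint easy to verify before calling setchunksize:

        # A valid chunk size must leave no remainder when divided by 64K.
        expr 13107000 % 65536   # prints 65336 -> rejected
        expr 13107200 % 65536   # prints 0     -> accepted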
  • 28. NFS
    • Set the file extensions excluded from compression:
      maprcli config save -values {"mapr.fs.nocompression":"bz2,gz,tgz,tbz2,zip,z,Z,mp3,jpg,jpeg,mpg,mpeg,avi,gif,png"}
      [gpadmin@mdw viadea]$ maprcli config load -keys mapr.fs.nocompression
      mapr.fs.nocompression
      bz2,gz,tgz,tbz2,zip,z,Z,mp3,jpg,jpeg,mpg,mpeg,avi,gif,png
  • 29. Managing Data
    • Dump and restore volumes:
      1. Full dump:
         maprcli volume dump create -e endstate -dumpfile fulldump1 -name viadeavol
      2. Make changes to viadeavol.
      3. Incremental dump:
         maprcli volume dump create -s endstate -e endstate2 -name viadeavol -dumpfile incrdump1
      4. Full restore:
         maprcli volume dump restore -name viadeavol_restore -dumpfile fulldump1 -n
      5. Mount viadeavol_restore.
      6. Incremental restore:
         maprcli volume dump restore -name viadeavol_restore -dumpfile incrdump1
  • 30. Managing Data
    • List disk information:
      [root@mdw]# /opt/mapr/server/mrconfig disk list
      ListDisks resp: status 0 count=1
      guid 01C7E418-ACC6-4F15-D202-0141CCEE4E00
      size 20480MB
      ListDisks /data/hdpee/storagefile
        DG 0: Single SingleDisk50218 Online
        DG 1: Concat Concat12 Online
        SP 0: name SP1, Online, size 9874 MB, free 9379 MB, path /data/hdpee/storagefile
      [root@mdw]# /opt/mapr/server/mrconfig sp list
      ListSPs resp: status 0:1
      No. of SPs (1), totalsize 9874 MB, totalfree 9379 MB
      SP 0: name SP1, Online, size 9874 MB, free 9379 MB, path /data/hdpee/storagefile
  • 31. Users and Groups
    • List entity usage:
      [root@mdw]# maprcli entity list
      DiskUsage  EntityQuota  EntityType  EntityName  VolumeCount  EntityAdvisoryquota  EntityId  EntityEmail
      0          0            0           gpadmin     0            0                    500       gpadmin@viadeamapr.com
      212        0            0           root        19           0                    0         root@viadeamapr.com
      0          1048576      0           viadea      1            0                    666       viadea@viadeamapr.com
  • 32. Users and Groups
    • Cluster permission codes:
      login (including cv): log in to the Greenplum HD EE Control System, use the API and command-line interface, read access on cluster and volumes
      ss: start/stop services
      cv: create volumes
      a:  admin access
      fc: full control (administrative access and permission to change the cluster ACL)
  • 33. Users and Groups
    • Volume permission codes:
      dump:    dump the volume
      restore: mirror or restore the volume
      m:       modify volume properties, create and delete snapshots
      d:       delete a volume
      fc:      full control (admin access and permission to change the volume ACL)
  • 34. Users and Groups
    • List ACLs:
      [root@mdw conf]# maprcli acl show -type cluster
      Principal     Allowed actions
      User root     [login, ss, cv, a, fc]
      User gpadmin  [login, ss, cv, a, fc]
      [root@mdw conf]# maprcli acl show -type volume -name viadeavol -user root
      Principal  Allowed actions
      User root  [dump, restore, m, d, fc]
  • 35. Users and Groups
    • Modify the ACL for a user:
      maprcli acl edit -type cluster -user viadea:cv
      maprcli acl edit -type cluster -user viadea:a
      maprcli acl edit -type volume -name viadeavol -user viadea:m
    • Modify the ACL for a whole cluster or volume:
      maprcli acl set -type volume -name test-volume -user jsmith:dump,restore,m rjones:fc
    • Set a volume quota:
      maprcli volume modify -name viadeavol -quota 2G
    • Set an entity quota:
      maprcli entity modify -type 0 -name viadea -quota 1T
  • 36. Troubleshooting & Performance Tuning
    • Small Job (1)
      mapred-site.xml:
      <property>
        <name>mapred.fairscheduler.smalljob.schedule.enable</name>
        <value>true</value>
        <description>Enable small job fast scheduling inside fair scheduler. TaskTrackers should reserve a slot called ephemeral slot which is used for smalljob if cluster is busy.</description>
      </property>
  • 37. Troubleshooting & Performance Tuning
    • Small Job (2)
      <!-- Small job definition. If a job does not satisfy any of the following limits it is not considered a small job and will be moved out of the small job pool. -->
      <property>
        <name>mapred.fairscheduler.smalljob.max.maps</name>
        <value>10</value>
        <description>Small job definition. Max number of maps allowed in small job.</description>
      </property>
      <property>
        <name>mapred.fairscheduler.smalljob.max.reducers</name>
        <value>10</value>
        <description>Small job definition. Max number of reducers allowed in small job.</description>
      </property>
  • 38. Troubleshooting & Performance Tuning
    • Small Job (3)
      <property>
        <name>mapred.fairscheduler.smalljob.max.inputsize</name>
        <value>10737418240</value>
        <description>Small job definition. Max input size in bytes allowed for a small job. Default is 10GB.</description>
      </property>
      <property>
        <name>mapred.fairscheduler.smalljob.max.reducer.inputsize</name>
        <value>1073741824</value>
        <description>Small job definition. Max estimated input size for a reducer allowed in small job. Default is 1GB per reducer.</description>
      </property>
  • 39. Troubleshooting & Performance Tuning
    • Small Job (4)
      <property>
        <name>mapred.cluster.ephemeral.tasks.memory.limit.mb</name>
        <value>200</value>
        <description>Small job definition. Max memory in mbytes reserved for an ephemeral slot. Default is 200mb. This value must be the same on JobTracker and TaskTracker nodes.</description>
      </property>
  • 40. Troubleshooting & Performance Tuning
    • Memory for Greenplum HD EE services
      /opt/mapr/conf/warden.conf:
      service.command.tt.heapsize.percent=2   # The percentage of heap space reserved for the TaskTracker.
      service.command.tt.heapsize.max=325     # The maximum heap space that can be used by the TaskTracker.
      service.command.tt.heapsize.min=64      # The minimum heap space for use by the TaskTracker.

      [gpadmin@mdw viadea]$ cat /opt/mapr/conf/warden.conf | grep size | grep percent
      service.command.jt.heapsize.percent=10
      service.command.tt.heapsize.percent=2
      service.command.hbmaster.heapsize.percent=4
      service.command.hbregion.heapsize.percent=25
      service.command.cldb.heapsize.percent=8
      service.command.mfs.heapsize.percent=20
      service.command.webserver.heapsize.percent=3
      service.command.os.heapsize.percent=3
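    To make the three TaskTracker settings concrete, the sketch below assumes (this is an assumption, not something the slides state) that the warden takes the configured percentage of the node's physical memory and clamps it between the min and max values:

        # Estimate the TaskTracker heap (MB) from warden.conf-style settings,
        # ASSUMING: heap = clamp(percent% of physical RAM, min, max).
        TOTAL_MB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
        PCT=2; MIN=64; MAX=325
        HEAP=$(( TOTAL_MB * PCT / 100 ))
        [ "$HEAP" -lt "$MIN" ] && HEAP=$MIN
        [ "$HEAP" -gt "$MAX" ] && HEAP=$MAX
        echo "Estimated TaskTracker heap: ${HEAP} MB"   # e.g. 325 MB on a 48 GB node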
  • 41. Troubleshooting & Performance Tuning
    • Memory for MapReduce
      /opt/mapr/hadoop/hadoop-0.20.2/conf/mapred-site.xml:
      <property>
        <name>mapreduce.tasktracker.reserved.physicalmemory.mb</name>
        <value></value>
        <description>Maximum physical memory the tasktracker should reserve for mapreduce tasks. If tasks use more than the limit, the task using the most memory will be killed. Expert only: set this value only if the tasktracker should use a certain amount of memory for mapreduce tasks. In the MapR distro the warden figures this number out based on the services configured on a node. Setting mapreduce.tasktracker.reserved.physicalmemory.mb to -1 disables physical memory accounting and task management.</description>
      </property>
  • 42. Troubleshooting & Performance Tuning
    • Memory for MapReduce: map task memory
      Map tasks use memory mainly in two ways:
      - The application consumes memory to run the map function.
      - The MapReduce framework uses an intermediate buffer to hold serialized (key, value) pairs (io.sort.mb).
      /opt/mapr/hadoop/hadoop-0.20.2/conf/mapred-site.xml:
      io.sort.mb
      The buffer used to hold map outputs in memory before writing the final map outputs. Setting this value very low may cause spills. By default, if left empty, the value is set to 50% of the heap size for the map task. If the average input to a map is "MapIn" bytes, then the value of io.sort.mb should typically be 1.25 times MapIn bytes.
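    As a worked example of the 1.25x guideline (the 100 MB input figure below is only an illustration, not from the slides):

        # If the average map input is 100 MB, size io.sort.mb at roughly 1.25x that.
        MAP_IN_MB=100
        IO_SORT_MB=$(( MAP_IN_MB * 125 / 100 ))
        echo "Suggested io.sort.mb = ${IO_SORT_MB}"   # 125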
  • 43. Troubleshooting & Performance Tuning
    • Memory for MapReduce: reduce task memory
      mapred.reduce.child.java.opts
      Java opts for the reduce tasks. The default heap size (-Xmx) is determined by the memory reserved for MapReduce at the tasktracker. A reduce task is given more memory than a map task:
      Default memory for a reduce task = (total memory reserved for MapReduce) * (2 * #reduceslots / (#mapslots + 2 * #reduceslots))
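    Plugging illustrative numbers into that formula (the 6 map slots, 3 reduce slots, and 6000 MB reserved for MapReduce are assumptions for the example, not values from the slides):

        # reduce_mem = total * (2*R / (M + 2*R))
        TOTAL_MB=6000; MAP_SLOTS=6; REDUCE_SLOTS=3
        REDUCE_MB=$(( TOTAL_MB * 2 * REDUCE_SLOTS / (MAP_SLOTS + 2 * REDUCE_SLOTS) ))
        echo "Default reduce task memory = ${REDUCE_MB} MB"   # 3000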
  • 44. Troubleshooting & Performance Tuning
    • Task numbers (1)
      Map slots should be based on how many map tasks can fit in memory, and reduce slots should be based on the number of CPUs.
      mapred.tasktracker.map.tasks.maximum:    (CPUS > 2) ? (CPUS * 0.75) : 1  (at least one map slot, up to 0.75 times the number of CPUs)
      mapred.tasktracker.reduce.tasks.maximum: (CPUS > 2) ? (CPUS * 0.50) : 1  (at least one reduce slot, up to 0.50 times the number of CPUs)
      Variables in the formulas:
      CPUS  - number of CPUs present on the node
      DISKS - number of disks present on the node
      MEM   - memory reserved for MapReduce tasks
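    For example, on a hypothetical 8-CPU node (the node size is an assumption for illustration) the defaults above work out as follows:

        # 8 CPUs: map slots = 8 * 0.75 = 6, reduce slots = 8 * 0.50 = 4
        CPUS=8
        MAP_SLOTS=$(( CPUS > 2 ? CPUS * 75 / 100 : 1 ))
        REDUCE_SLOTS=$(( CPUS > 2 ? CPUS * 50 / 100 : 1 ))
        echo "map slots=${MAP_SLOTS}, reduce slots=${REDUCE_SLOTS}"   # map slots=6, reduce slots=4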
  • 45. Troubleshooting & Performance Tuning
    • Task numbers (2)
      mapreduce.tasktracker.prefetch.maptasks
      How many map tasks should be scheduled in advance on a tasktracker, given as a percentage of map slots. The default is 1.0, which means the number of tasks overscheduled equals the total map slots on the TaskTracker.
  • 46. Troubleshooting & Performance Tuning
    • Final and important: what to collect?
      /opt/mapr/support/tools/mapr-support-collect.sh -n support-output.txt
      [root@mdw collect]# ls -altr /opt/mapr/support/collect/support-output.txt.tar
      -rw-r--r-- 1 root root 27607040 Mar 1 22:34 /opt/mapr/support/collect/support-output.txt.tar
  • 47. Troubleshooting & Performance Tuning
    • What is in the support dump file?
      1. A "cluster" directory
      2. A directory for each node
      [root@mdw support-output.txt]# ls -altr
      total 32
      drwxr-xr-x 3 root root 4096 Mar 1 22:19 cluster
      drwxr-xr-x 8 root root 4096 Mar 1 22:24 .
      drwxr-xr-x 5 root root 4096 Mar 1 22:33 172.28.4.1
      drwxr-xr-x 2 root root 4096 Mar 1 22:34 172.28.8.7
      drwxr-xr-x 2 root root 4096 Mar 1 22:34 172.28.8.3
      drwxr-xr-x 2 root root 4096 Mar 1 22:34 172.28.4.5
      drwxr-xr-x 2 root root 4096 Mar 1 22:34 172.28.4.250
      drwxr-xr-x 4 root root 4096 Mar 1 22:36 ..
  • 48. Troubleshooting & Performance Tuning
    • What is in the "cluster" directory?
      [root@mdw cluster]# cat cluster.txt | grep Output
      Output of /opt/mapr/bin/maprcli node list -json
      Output of /opt/mapr/bin/maprcli node topo -json
      Output of /opt/mapr/bin/maprcli node heatmap -view status -json
      Output of /opt/mapr/bin/maprcli volume list -json
      Output of /opt/mapr/bin/maprcli dump zkinfo -json
      Output of /opt/mapr/bin/maprcli config load -json
      Output of /opt/mapr/bin/maprcli alarm list -json
      (…)
  • 49. Troubleshooting & Performance Tuning
    • What is in a "node" directory? (1)
      "conf" subdirectory: roles, all conf files, disk info, and some other OS command output
      "logs" subdirectory: all logs, /var/log/messages, and some MapR status logs
      [root@mdw logs]# cat mfsState.txt | grep Output
      Output of /opt/mapr/server/mrconfig -p 5660 info threads
      Output of /opt/mapr/server/mrconfig -p 5660 info containers resync local
      Output of /opt/mapr/bin/maprcli trace dump -port 5660
      Output of /opt/mapr/bin/maprcli dump fileserverworkinfo -fileserverip 172.28.4.1
      "pam.d" subdirectory
  • 50. Troubleshooting & Performance Tuning
    • What is in a "node" directory? (2)
      MapRBuildVersion
      redhat-release
      secure.log
      sysinfo.txt: output of some OS commands
      [gpadmin@mdw 172.28.4.1]$ cat sysinfo.txt | grep Output
      Output of lscpu
      Output of ifconfig -a
      Output of uname -a
      Output of netstat -an
      Output of netstat -rn
      Output of hostname
      Output of cat /etc/hostname
      (…)