If a data block is replicated, which data nodes will it be replicated to? Is there any tool to show where the replicated blocks are present?
If you know the filename, you can look this up through the DFS browser.
Go to your namenode web interface, click "Browse the filesystem", and navigate to the file you're interested in. At the bottom of the page there will be a list of all the blocks in the file and where each of those blocks is located.
NOTE: This block listing only appears when you click on an actual file within the HDFS filesystem browser.
Alternatively, you can run:
hadoop fsck / -files -blocks -locations
which will report on all blocks and all of their locations.
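If you'd rather get the same information programmatically, here is a minimal sketch using the standard Hadoop FileSystem API. The class name is mine and the file path is passed as an argument; it assumes the usual Hadoop client jars and config files (core-site.xml / hdfs-site.xml) are on the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockLocations {
    public static void main(String[] args) throws Exception {
        Path path = new Path(args[0]);            // the HDFS file you're interested in
        Configuration conf = new Configuration(); // picks up core-site.xml / hdfs-site.xml
        try (FileSystem fs = FileSystem.get(conf)) {
            FileStatus status = fs.getFileStatus(path);
            // One BlockLocation per block, covering the whole length of the file
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.printf("offset=%d length=%d hosts=%s%n",
                        block.getOffset(),
                        block.getLength(),
                        String.join(",", block.getHosts()));
            }
        }
    }
}

Each output line describes one block of the file and the datanodes holding its replicas, which is the same information the web UI and fsck show.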
There is a nice tool that was open-sourced by CERN; see the blog article https://db-blog.web.cern.ch/blog/daniel-lanza-garcia/2016-04-tool-visualise-block-distribution-hadoop-hdfs-cluster
It shows block locations not only across nodes but also across the disks on those nodes, in a tabular view.
Code for this project can be found here: https://github.com/cerndb/hdfs-metadata
Internally this CERN tool uses Hadoop API calls (see, for example, https://github.com/cerndb/hdfs-metadata/blob/master/src/main/java/ch/cern/db/hdfs/DistributedFileSystemMetadata.java#L168),
so it's much faster than the CLI tools if you plan to run it over many files and then look at the consolidated results.
hdfs fsck / -files -blocks -locations
by contrast, reports blocks one file at a time and gives no consolidated view.
We use this tool to see whether a huge Parquet table is distributed evenly across nodes and disks, i.e. to rule out data distribution flaws as the cause of data processing skew.
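For context, here is a rough sketch of the kind of per-datanode summary such a tool can build with the same FileSystem API. This is my own illustration, not the CERN tool's code, and it only aggregates block replicas per host (the real tool also breaks the numbers down per disk).

import java.util.Map;
import java.util.TreeMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class BlocksPerDatanode {
    public static void main(String[] args) throws Exception {
        Path root = new Path(args[0]);            // e.g. the root directory of a big table
        Configuration conf = new Configuration();
        Map<String, Long> replicasPerHost = new TreeMap<>();
        try (FileSystem fs = FileSystem.get(conf)) {
            // listFiles(..., true) walks the tree recursively and already returns
            // block locations, so there is no extra namenode call per file
            RemoteIterator<LocatedFileStatus> it = fs.listFiles(root, true);
            while (it.hasNext()) {
                LocatedFileStatus file = it.next();
                for (BlockLocation block : file.getBlockLocations()) {
                    for (String host : block.getHosts()) {
                        replicasPerHost.merge(host, 1L, Long::sum);
                    }
                }
            }
        }
        replicasPerHost.forEach((host, count) ->
                System.out.printf("%-40s %d block replicas%n", host, count));
    }
}

If the replica counts per host come out heavily skewed, that is a hint the table itself is unevenly placed, rather than the skew coming from the processing side.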