
Searching for files in Tcl using an old answer: https://stackoverflow.com/a/435094/984975

First, let me go over what I am currently doing, using this procedure (thanks to Jacson):

# findFiles
# basedir - the directory to start looking in
# pattern - A pattern, as defined by the glob command, that the files must match
proc findFiles { basedir pattern } {

    # Fix the directory name; this ensures it is in the native format
    # for the platform and contains a final directory separator
    set basedir [string trimright [file join [file normalize $basedir] { }]]
    set fileList {}

    # Look in the current directory for matching files, -type {f r}
    # means only readable normal files are looked at, -nocomplain stops
    # an error being thrown if the returned list is empty
    foreach fileName [glob -nocomplain -type {f r} -path $basedir $pattern] {
        lappend fileList $fileName
    }

    # Now look for any subdirectories in the current directory
    foreach dirName [glob -nocomplain -type {d r} -path $basedir *] {
        # Recursively call the routine on the subdirectory and append any
        # new files to the results
        set subDirList [findFiles $dirName $pattern]
        if { [llength $subDirList] > 0 } {
            foreach subDirFile $subDirList {
                lappend fileList $subDirFile
            }
        }
    }
    return $fileList
}

And calling it like this:

findFiles some_dir_name *.c

Current result:

bad option "normalize": must be atime, attributes, channels, copy, delete, dirname, executable, exists, extension, isdirectory, isfile, join, lstat, mtime, mkdir, nativename, owned, pathtype, readable, readlink, rename, rootname, size, split, stat, tail, type, volumes, or writable

Now, if we run:

glob *.c

we get a lot of files, but they are all in the current directory.

The goal is to get all the files, with their paths, in all subfolders on the machine. Can anyone help?

What I really want to do is find the directory that contains the largest number of *.c files. However, if I can list all files with their paths, I can count how many there are in each directory and pick the one with the highest count.
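To illustrate that counting step, here is a minimal sketch built on the findFiles proc above, assuming it returns a flat list of full paths (some_dir_name is just the placeholder directory from the call above):

# Count matching files per directory, then report the directory with the most.
array set counts {}
foreach f [findFiles some_dir_name *.c] {
    set dir [file dirname $f]
    if {[info exists counts($dir)]} {
        incr counts($dir)
    } else {
        set counts($dir) 1
    }
}
set bestDir {}
set bestCount -1
foreach {dir n} [array get counts] {
    if {$n > $bestCount} {
        set bestDir $dir
        set bestCount $n
    }
}
puts "$bestDir contains $bestCount .c files"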


3 Answers


You are using an old version of Tcl. [file normalize] was introduced in Tcl 8.4, around 2002. Upgrade.
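As a quick check (a one-liner aside, not from the original answer), you can see which Tcl interpreter you are actually running with:

puts [info patchlevel]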

If you can't, then use glob, but call it once for files only and then walk the directories yourself. Have a look at the glob -types option.

Here is a demonstration:

# Callback invoked for every matching file.
proc on_visit {path} {
    puts $path
}

# Walk $base recursively, calling $func on every file matching $glob.
proc visit {base glob func} {
    # glob -directory already returns each result prefixed with $base,
    # so the paths can be used directly.
    foreach f [glob -nocomplain -types f -directory $base $glob] {
        if {[catch {eval $func [list $f]} err]} {
            puts stderr "error: $err"
        }
    }
    foreach d [glob -nocomplain -types d -directory $base *] {
        visit $d $glob $func
    }
}

proc main {base} {
    visit $base *.c [list on_visit]
}

main [lindex $argv 0]
Answered 2012-06-19T19:20:56.883

I would use the ::fileutil::traverse package from Tcllib to do this.

Something like:

package require fileutil::traverse

proc check_path {path} {
     string equal [file extension $path] ".c"
}

# The traverser is given the directory to walk; some_dir_name is the
# directory used in the question.
set obj [::fileutil::traverse %AUTO% some_dir_name -filter check_path]
array set pathes {}
$obj foreach file {
     if {[info exists pathes([file dirname $file])]} {
        incr pathes([file dirname $file])
     } else {
        set pathes([file dirname $file]) 1
     }
}

# print the per-directory counts (picking the biggest is shown below)
foreach {name value} [array get pathes] {
     puts "$name : $value"
}
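To actually pick the directory with the largest count, a small follow-up over the pathes array is enough; this is just a sketch added here, not part of the original answer:

# Turn the pathes array into {dir count} pairs and sort by count, descending.
set pairs {}
foreach {name value} [array get pathes] {
    lappend pairs [list $name $value]
}
set pairs [lsort -integer -decreasing -index 1 $pairs]
puts "Most .c files: [lindex $pairs 0]"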
Answered 2012-06-19T19:30:52.233

For quick (one-level) file pattern matching, use:

glob **/*.c

If you want to search recursively, use:

proc ::findFiles { baseDir pattern } {
  set dirs [ glob -nocomplain -type d [ file join $baseDir * ] ]
  set files {}
  foreach dir $dirs { 
    lappend files {*}[ findFiles $dir $pattern ] 
  }
  lappend files {*}[ glob -nocomplain -type f [ file join $baseDir $pattern ] ] 
  return $files
}

puts [ join [ findFiles $basepath "*.tcl" ] \n ]
Answered 2012-10-12T03:55:35.137