Questions tagged [scp]


0 votes
2 answers
10797 views

bash - Bash script to scp the newest file in a remote server's directory

OK, so I sort of know how to do this locally with a find-then-cp command, but not how to do the same thing remotely with scp.

So, knowing this:

The target directory will be full of database backups. How do I tell it to find the most recent backup and scp it to my local machine?
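A minimal sketch of the remote variant (assumptions on my part: the backups sit in one remote directory, "latest" means most recently modified, and the file names contain no spaces):

    #!/bin/bash
    # Host name and directory are placeholders.
    remote="user@backup-host"
    dir="/var/backups/db"

    # Ask the remote shell for the most recently modified file in the directory.
    latest=$(ssh "$remote" "ls -t '$dir' | head -n 1")

    # Copy that single file into the current local directory.
    scp "$remote:$dir/$latest" .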

0 votes
3 answers
3456 views

php - Copying with scp using PHP's exec command

I want to copy a zip file from a remote machine to my local system using SCP. I have a PHP file in which I use the PHP function exec(). If I run upload.php, as in http://www.abc.com/upload.php, the zip file should be copied to my local Linux folder. My path is /var/www/html/mydirectory/. How can I do this?
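A sketch of the command such a script would need to run (host, key path, and file names are placeholders; since exec() runs non-interactively, scp must authenticate with a passphrase-less key readable by the user PHP runs as, e.g. www-data):

    scp -i /var/www/.ssh/id_rsa \
        user@remote.example.com:/path/to/archive.zip \
        /var/www/html/mydirectory/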

0 votes
2 answers
4168 views

iphone - Download/upload files from an sftp server on the iPhone

I need to connect to an sftp/scp server, download a file, edit it, and then re-upload it.

As far as I know, the SDK itself does not let you make secure connections over ftp.

Any ideas?

0 votes
2 answers
3295 views

python - Python: executing scp, sending the password via stdin does not work

I am trying the following:

But I still get the password prompt, and the password is not sent. I know I could use scp with keys, but that is not what I need here.
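scp reads the password from the controlling terminal (/dev/tty), not from stdin, which is why writing it to the subprocess's stdin has no effect. One common workaround (a swapped-in tool, not something from the question) is sshpass, which allocates a pseudo-terminal and answers the prompt there:

    # Hard-coding a password like this is insecure; keys remain preferable.
    sshpass -p 'secret' scp user@remote.example.com:/path/to/file /tmp/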

Any help?

0 votes
2 answers
1623 views

ruby - Performance issues with Ruby and Net::SCP transfers (sockets)

Judging by what the command-line scp utility is capable of, SCP upload speeds in the library appear to be severely limited. I know this is Ruby (1.9.2-p0), but Net::SCP is roughly 8x slower than the Linux utility (observed with large files... see below). I would love to know (I had a quick look through the code) whether this is down to the sockets in Ruby, or whether the Net::SCP sockets could be multiplexed better.

I noticed that no matter which upload approach I tried (serial uploads, channels with asynchronous operation, multiple instances of the scp object), I could not get transfer speeds above 9 megabytes/sec on SCP uploads. Now... let me explain the details of my investigation:

1) Tried different encryption algorithms

I used different types of encryption, and the speed did not change much. Example: I can send my 1 GB test file with command-line scp (cipher = arcfour128) and get a 73.3 megabytes/sec transfer rate on my internal gigabit connection. Using the library's Net::SCP.upload, I never got more than about 9 megabytes/sec on the same internal gigabit connection.
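For reference, the command-line baseline described above would look something like this (a sketch; host and file names are placeholders):

    # Push a ~1 GB test file using the fast (and weak; since removed from
    # modern OpenSSH) arcfour128 cipher.
    scp -c arcfour128 testfile-1gb user@gigabit-host:/tmp/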

2) Tried different hosts/operating systems

I found Linux -> Linux uploads to be the fastest. SUA's ssh server (Windows) would only give me upload speeds of up to 13.5 megabytes/sec (Linux -> Windows, using the arcfour cipher with command-line scp), whereas Linux -> Linux (arcfour, with command-line scp) was a staggering 73.3 megabytes/sec. I should mention that these Windows and Linux machines are the exact same model, hardware, etc.

3) Tried different SCP upload methods

  • Used 2 synchronous upload! calls, completed one after the other.
  • Used 2 asynchronous upload calls, started one after the other.
  • Used 2 Net::SCP objects and submitted the file to the non-blocking/asynchronous version of upload (so they ran in parallel).

None of these approaches gave any significant performance gain, which is a bit frustrating.

Here are the test results (the text is formatted for readability, but it is similar to the output of the code provided):

If you have a large file handy (I used a ~1 GB file), you can use these rspec tests (in scp_spec.rb), or adapt them to whatever testing tool you are familiar with, to see this performance degradation.

If you do not know how to improve this performance within the library, do you have any further ideas for squeezing out some extra parallel SCP transfer speed, other than invoking the scp utility through a subshell?

The rspec tests are here: https://gist.github.com/703966

0 votes
1 answer
2377 views

linux - Linux: uploading unfinished files - with file size check (scp/rsync)

I typically end up in the following situation: I have, say, a 650 MB MPEG-2 .avi video file from a camera. Then, I use ffmpeg2theora to convert it into Theora .ogv video file, say some 150 MB in size. Finally, I want to upload this .ogv file to an ssh server.

Let's say the ffmpeg2theora encoding process takes some 15 minutes on my PC. On the other hand, the upload runs at a speed of about 60 KB/s, which takes some 45 minutes (for the 150 MB .ogv). So: if I first encode, wait for the encoding process to finish, and then upload, it would take approximately 15 min + 45 min = 60 min to complete the operation.

So, I thought it would be better if I could somehow start the upload, in parallel with the encoding operation; then, in principle - as the uploading process is slower (in terms of transferred bytes/sec) than the encoding one (in terms of generated bytes/sec) - the uploading process would always "trail behind" the encoding one, and so the whole operation (enc+upl) would complete in just 45 minutes (that is, just the time of the upload process +/- some minutes depending on actual upload speed situation on wire).

My first idea was to pipe the output of ffmpeg2theora to tee (so as to keep a local copy of the .ogv) and then, pipe the output further to ssh - as in:
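    # (sketch of the pipeline described above; the ffmpeg2theora options and
    # file names are placeholders, since the original command is not shown)
    ffmpeg2theora --output /dev/stdout video.avi \
      | tee video.ogv \
      | ssh user@server 'cat > video.ogv'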

While this command does, indeed, function, one can easily see from ffmpeg2theora's running log in the terminal that, in this case, it predicts a completion time of 1 hour; that is, there seems to be no benefit in terms of a smaller completion time for enc+upl combined. (While it is possible that this is due to network congestion, and me getting less network speed at the time, it seems to me that ffmpeg2theora has to wait for an acknowledgment for each little chunk of data it sends through the pipe, and that ACK ultimately has to come from ssh... otherwise ffmpeg2theora would not be able to provide a completion time estimate at all. Then again, maybe the estimate is wrong and the operation would indeed complete in 45 min - dunno, I never had the patience to wait and time the process; I just get pissed at the 1 hr estimate, and hit Ctrl-C ;) ...)

My second attempt was to run the encoding process in one terminal window, i.e.:

..., and the uploading process, using scp, in another terminal window (thereby 'forcing' 'parallelization'):
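    # (sketch; the exact commands are placeholders, as the originals are not shown)
    # terminal 1 - encode to a local file:
    ffmpeg2theora -o video.ogv video.avi
    # terminal 2 - upload the still-growing file:
    scp video.ogv user@server:video.ogv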

The problem here is: let's say, at the time when scp starts, ffmpeg2theora has already encoded 5 MB of the output .ogv file. At this time, scp sees this 5 MB as the entire file size, and starts uploading - and it exits when it encounters the 5 MB mark; while in the meantime, ffmpeg2theora may have produced additional 15 MB, making the .ogv file 20 MB in total size at the time scp has exited (finishing the transfer of the first 5 MB).

Then I learned (joen.dk » Tip: scp Resume) that rsync supports 'resume' of partially completed uploads, as in:
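    # (sketch of the resume invocation from that tip; file and host are
    # placeholders)
    rsync --partial --progress -e ssh video.ogv user@server:video.ogv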

..., so I tried using rsync instead of scp - but it seems to behave exactly the same as scp in terms of file size, that is: it will only transfer up to the file size read at the beginning of the process, and then it will exit.

So, my question to the community is: Is there a way to parallelize the encoding and uploading process, so as to gain the decrease in total processing time?

I'm guessing there could be several ways, as in:

  • A command line option (that I haven't seen) that forces scp/rsync to continuously check the file size - if the file is open for writing by another process (then I could simply run the upload in another terminal window)
  • A bash script; say, running rsync --partial in a while loop that runs as long as the .ogv file is open for writing by another process (see the sketch after this list). I don't actually like this solution, since I can hear the hard disk scanning for the resume point every time I run rsync --partial - which, I guess, cannot be good, given that I know the same file is being written to at the same time.
  • A different tool (other than scp/rsync) that does support upload of a "currently generated"/"unfinished" file (the assumption being it can handle only growing files; it would exit if it encounters that the local file is suddenly less in size than the bytes already transferred)
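The second idea above might look something like this (a sketch; using lsof to detect that the file is still open is my assumption, and only an approximation, since it matches any open file handle, not just writers):

    # Keep re-running a resumable rsync while some process still has the
    # .ogv open, then run it once more to pick up the final bytes.
    while lsof video.ogv >/dev/null 2>&1; do
        rsync --partial --progress -e ssh video.ogv user@server:video.ogv
        sleep 5
    done
    rsync --partial --progress -e ssh video.ogv user@server:video.ogv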

... but it could also be that I'm overlooking something - and 1 hr is as good as it gets (in other words, it is maybe logically impossible to achieve a 45 min total time - even if trying to parallelize) :)

Well, I look forward to comments that would, hopefully, clarify this for me ;)

Thanks in advance,
Cheers!

0 votes
2 answers
752 views

perl - Secure upload without an account on the remote server (scp)

I am looking for a way to structure a certain script.

  • There are some (Linux) users A, B and C who scan images into $HOME/images/scan
  • They should upload these pictures to a remote server on which they have no accounts.
  • Hence the virtual user X, who has accounts on both the local and the remote machine, but no direct access to the users' home directories.
  • They all share the common group 'images', and the users' scan directories are readable by that group.

So I want to find a way for the users to run a script that uploads the pictures to the remote server using X's permissions and X's account on the remote server. I made an RSA key for this and added it to the authorized keys file on the remote server. For user X, everything works fine.

I tried some setgid/setuid Perl scripts, but they could not run scp with user X's permissions, and they did not use X's RSA key either. Like this example:

So I am looking for other ways to meet my needs.
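One alternative worth sketching (my assumption, not something from the question): the Linux kernel ignores the setuid/setgid bits on interpreted scripts, which is likely why the Perl attempts failed, so let sudo do the user switch instead:

    #!/bin/bash
    # /usr/local/bin/upload-scans -- invoked by A, B or C as:
    #   sudo -u X /usr/local/bin/upload-scans
    # A sudoers rule (edited with visudo) allows that without a password:
    #   %images ALL=(X) NOPASSWD: /usr/local/bin/upload-scans
    # Running as X means X's own RSA key is used; the scan directories must
    # be readable by X through the shared 'images' group.
    scp -i /home/X/.ssh/id_rsa /home/*/images/scan/* X@remote-server:/incoming/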

Thank you in advance!

All the best,
WK

0 votes
2 answers
1181 views

php - PHP ssh2_scp_send file permissions

I am using the PHP function ssh2_scp_send to transfer files from one server to another. Interestingly, if I write the permissions directly in octal form (i.e. 0644), everything works fine. If I instead wrap them in quotes or use a variable, it no longer works.

To be clearer, this works: ssh2_scp_send($conn, $localFile, $remoteFile, 0644);

Does not work: ssh2_scp_send($conn, $localFile, $remoteFile, "0644");

Does not work: $permission = 0644; ssh2_scp_send($conn, $localFile, $remoteFile, $permission);

Does anyone know why this happens?

0 votes
1 answer
2592 views

ssh - How can I ssh to a remote server that sits behind multiple firewalls?

Here is my situation:

  • I can access server A via ssh from my home laptop.
  • Server B is only accessible via ssh from server A.
  • Server C is only accessible via ssh from server B.

Is there any way I can configure my .ssh/config so that I can ssh to server C directly from my laptop? I need this because I regularly have to transfer files from server C back to my laptop. I am using 'scp', but manually walking this ssh hierarchy is too painful. I am wondering if there is a more direct way to do this through the magic of ssh.
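This is what ssh's host-hopping configuration is for. A sketch of ~/.ssh/config with placeholder host names (ProxyJump needs OpenSSH 7.3+; older versions can use the ProxyCommand shown in the comment):

    # ~/.ssh/config
    Host serverA
        HostName a.example.com
        User me

    Host serverB
        HostName b.internal
        User me
        ProxyJump serverA
        # pre-7.3 equivalent: ProxyCommand ssh -W %h:%p serverA

    Host serverC
        HostName c.internal
        User me
        ProxyJump serverB

With that in place, ssh serverC and scp serverC:/path/file . work directly from the laptop, with each hop tunneled through the previous one.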

0 votes
1 answer
5696 views

linux - How do I move files from a computer connected via SSH to a VPS?

I know scp is often used to move files between servers, but I am not sure how to reference my own computer as a server. If scp is the right command for this, what am I missing?
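For what it's worth, when scp runs on your own computer, any path without a host: prefix simply refers to the local machine, so there is nothing special to configure. A sketch with placeholder names:

    # Run on your own machine, not on the VPS.
    # Push a local file to the VPS:
    scp ~/project/file.tar.gz user@vps.example.com:/home/user/

    # Or pull a file from the VPS into the current local directory:
    scp user@vps.example.com:/var/log/app.log .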