I have a web service that returns a byte array, and my goal is to turn that array into a .wav file on the client (a handheld device, such as a BlackBerry). But I really don't know how to do this. I tried just writing the bytes out with a FileOutputStream, but of course the result doesn't play. So once again I'm stuck. Any ideas?
So, there are quite a few .WAV variants; here is some documentation:
- http://en.wikipedia.org/wiki/WAV
- http://ccrma.stanford.edu/courses/422/projects/WaveFormat/ (note the endianness changes)
- http://www.lightlink.com/tjweber/StripWav/WAVE.html
It's not just a raw stream of data bytes, but it's close: prepend a small header and you should be good.
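As a sketch of that "small header": for uncompressed 16-bit PCM you can prepend the standard 44-byte RIFF/WAVE header by hand. The class name `WavHeader` and its parameters are my own, and it assumes signed little-endian PCM data:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavHeader {

    /** Prepends a 44-byte RIFF/WAVE header to raw PCM sample data. */
    public static byte[] addWavHeader(byte[] pcm, int sampleRate,
                                      int channels, int bitsPerSample) {
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        int blockAlign = channels * bitsPerSample / 8;

        ByteBuffer buf = ByteBuffer.allocate(44 + pcm.length);
        buf.order(ByteOrder.LITTLE_ENDIAN);    // WAV header fields are little-endian
        buf.put("RIFF".getBytes());
        buf.putInt(36 + pcm.length);           // size of the rest of the file
        buf.put("WAVE".getBytes());
        buf.put("fmt ".getBytes());
        buf.putInt(16);                        // fmt chunk size for plain PCM
        buf.putShort((short) 1);               // audio format 1 = uncompressed PCM
        buf.putShort((short) channels);
        buf.putInt(sampleRate);
        buf.putInt(byteRate);
        buf.putShort((short) blockAlign);
        buf.putShort((short) bitsPerSample);
        buf.put("data".getBytes());
        buf.putInt(pcm.length);                // size of the sample data
        buf.put(pcm);
        return buf.array();
    }
}
```

Write the returned array to a file with a .wav extension and any player that understands canonical PCM WAV should accept it.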
I think you could also use something like http://java.sun.com/j2se/1.5.0/docs/api/javax/sound/sampled/spi/AudioFileWriter.html
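On Java SE (note that javax.sound.sampled is not available on BlackBerry's Java ME, so this would run server-side or on a desktop, not on the handset itself), AudioSystem.write can build the WAV container for you. A minimal sketch, assuming the service returns raw signed little-endian PCM; the helper name and parameters are my own:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class PcmToWav {

    /**
     * Wraps raw PCM bytes in a WAV container. The sample rate, bit depth and
     * channel count must match what the service actually sends.
     */
    public static byte[] toWav(byte[] pcm, float sampleRate, int bits, int channels)
            throws IOException {
        // true = signed samples, false = little-endian byte order
        AudioFormat fmt = new AudioFormat(sampleRate, bits, channels, true, false);
        AudioInputStream ais = new AudioInputStream(
                new ByteArrayInputStream(pcm), fmt, pcm.length / fmt.getFrameSize());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
        return out.toByteArray();
    }
}
```

The same call with a `File` as the last argument writes the .wav straight to disk.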
Answered 2009-04-08T18:05:11.080
/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package bemukan.voiceRecognition.speechToText;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.UnsupportedAudioFileException;

/**
 * Cuts a slice out of an audio file and writes it to a new .wav file.
 * startTime and endTime are byte offsets into the decoded audio data.
 *
 * @author MuhammedYC
 */
public class SplitAudio {

    private static final int BUFFER_LENGTH = 1024;
    private final int startTime;
    private final int endTime;
    private final File sourceFile;

    public SplitAudio(File sourceFile, int startTime, int endTime) {
        this.startTime = startTime;
        this.endTime = endTime;
        this.sourceFile = sourceFile;
    }

    public void splitAudio() {
        File outputFile = new File("a.wav");
        try {
            AudioFileFormat fileFormat = AudioSystem.getAudioFileFormat(sourceFile);
            AudioFileFormat.Type targetFileType = fileFormat.getType();
            AudioFormat audioFormat = fileFormat.getFormat();

            // Read the whole source file into memory.
            AudioInputStream inputAIS = AudioSystem.getAudioInputStream(sourceFile);
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            byte[] abBuffer = new byte[BUFFER_LENGTH * audioFormat.getFrameSize()];
            int nBytesRead;
            while ((nBytesRead = inputAIS.read(abBuffer)) != -1) {
                baos.write(abBuffer, 0, nBytesRead);
            }
            inputAIS.close();

            // Here's the byte array everybody wants.
            byte[] abAudioData = baos.toByteArray();

            // Copy the [startTime, endTime) byte range into its own array.
            byte[] splittedAudio = new byte[endTime - startTime];
            System.arraycopy(abAudioData, startTime, splittedAudio, 0, splittedAudio.length);

            // Wrap the slice in a stream and let AudioSystem write the header.
            ByteArrayInputStream bais = new ByteArrayInputStream(splittedAudio);
            AudioInputStream outputAIS = new AudioInputStream(bais, audioFormat,
                    splittedAudio.length / audioFormat.getFrameSize());
            AudioSystem.write(outputAIS, targetFileType, outputFile);
        } catch (UnsupportedAudioFileException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
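One caveat about the copy loop above: startTime and endTime are treated as raw byte offsets, so cutting by seconds means first converting a time range into a frame-aligned byte range. A sketch of that conversion (the helper name `sliceByTime` and its parameters are hypothetical):

```java
public class AudioSlice {

    /** Converts a [startSec, endSec) time range into a frame-aligned byte slice. */
    public static byte[] sliceByTime(byte[] audio, float frameRate, int frameSize,
                                     double startSec, double endSec) {
        // Multiplying whole frame counts by frameSize keeps the cut on a frame boundary.
        int start = (int) (startSec * frameRate) * frameSize;
        int end = Math.min((int) (endSec * frameRate) * frameSize, audio.length);
        byte[] out = new byte[Math.max(0, end - start)];
        System.arraycopy(audio, start, out, 0, out.length);
        return out;
    }
}
```

The frame rate and frame size come from the AudioFormat the class already looks up, so the cut stays correct for stereo or multi-byte samples.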
Answered 2011-04-27T17:13:55.310