I built an app with an Expo React Native frontend and a Python backend connected through a Flask API. The goal is to send audio recordings from the Expo app over HTTP to the Python backend, which analyzes the audio file.
Here are the relevant code excerpts.
let recording = new Audio.Recording();
const FLASK_BACKEND = "http://10.42.224.223:8000/getResult";

export default function App() {
  const [data, setData] = useState({});
  const [text, setText] = useState("");
  const [analysisText, setAnalysisText] = useState("");

  useEffect(() => {
    fetch(FLASK_BACKEND)
      .then(res => res.json())
      .then(data => {
        setData(data);
        console.log(data);
      });
  }, []);

  const [imageUri, setImageUri] = useState(
    'https://media.npr.org/assets/img/2021/08/11/gettyimages-1279899488_wide-f3860ceb0ef19643c335cb34df3fa1de166e2761-s1100-c50.jpg'
  );

  const displayResult = async () => {
    // setImageUri('https://miro.medium.com/max/2000/1*V2mgZ7y0ngd3q4DZ01xkEQ.png');
    setImageUri(data["url"]);
    console.log(data["result"]);
    setAnalysisText(data["result"]);
  };
  const stopRecording = async () => {
    try {
      await recording.stopAndUnloadAsync();
      const result = recording.getURI();
      SetRecordedURI(result); // Here is the URI
      recording = new Audio.Recording();
      setRecordButtonColor('black');
      SetisRecording(false);
      console.log("Recording saved at: ", result);
      console.warn("Voice Recorded! Click Analyze to get your results.");
      try {
        const response = await FileSystem.uploadAsync(FLASK_BACKEND, result);
        // uploadAsync resolves with the response body as a string, so it
        // must be parsed (JSON.stringify here just re-wraps the string):
        const body = JSON.parse(response.body);
        setText(body.result);
      } catch (err) {
        console.error(err);
      }
      // recordedText = "Voice Recorded! Click Next to Continue."
    } catch (error) {
      console.log(error);
    }
  };
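To keep the file local until the button is pressed, the upload should not live inside stopRecording at all: stopping the recording should only store the URI, and the Analyze button should trigger the network call. Here is a minimal, framework-free sketch of that flow; the names `createRecorderFlow` and `uploadFn` are hypothetical, and in the real app `uploadFn` would be `FileSystem.uploadAsync`:

```javascript
// Sketch of a deferred-upload flow: stopping a recording only saves the URI
// locally; nothing is sent until analyze() runs (e.g. from the Analyze
// button). uploadFn is injected so this stays independent of expo-file-system.
function createRecorderFlow() {
  let savedUri = null;
  return {
    // call this from stopRecording instead of uploading right away
    saveRecording(uri) {
      savedUri = uri;
    },
    hasRecording() {
      return savedUri !== null;
    },
    // call this from the Analyze button's onPress
    async analyze(uploadFn, backendUrl) {
      if (savedUri === null) {
        throw new Error("No recording saved yet");
      }
      const response = await uploadFn(backendUrl, savedUri);
      return response.body; // raw JSON string returned by the Flask route
    },
  };
}
```

With this split, `analyze(FileSystem.uploadAsync, FLASK_BACKEND)` runs only on the button press, so the recording stays in the local directory until then.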
The button is defined like this:
<CustomButton
  text='Analyze'
  color='green'
  textColor='white'
  onPress={() => displayResult()}
/>
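One detail worth isolating: since `FileSystem.uploadAsync` resolves with the response body as a string, the parsing belongs in one place rather than scattered through handlers. A small helper sketch (the name `parseAnalysis` is hypothetical) that matches the `{"result": ..., "url": ...}` shape the Flask route returns:

```javascript
// Parse the raw body string returned by the Flask /getResult route into the
// fields the UI needs.
function parseAnalysis(rawBody) {
  const data = JSON.parse(rawBody);
  return {
    resultText: data.result, // text to show via setAnalysisText / setText
    imageUrl: data.url,      // image to show via setImageUri
  };
}
```

Usage would look like `const { resultText, imageUrl } = parseAnalysis(response.body);`.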
Now, the backend code looks like this; it receives the audio file and analyzes it.
from flask import Flask, request, Response
from flask_cors import CORS
from pprint import pprint
import json
import datetime

import speech_recognition as sr

app = Flask(__name__)
CORS(app)
def speech_to_text(audio_file):
    r = sr.Recognizer()
    # open the file
    with sr.AudioFile(audio_file) as source:
        # listen for the data (load audio to memory)
        audio_data = r.record(source)
        # recognize (convert from speech to text)
        text = r.recognize_google(audio_data)
    return str(text)
# Members for API Route
myURL = 'https://miro.medium.com/max/2000/1*V2mgZ7y0ngd3q4DZ01xkEQ.png'
# myString = "This is a test return"

@app.route("/getResult", methods=['GET', 'POST'])
def get_result():
    data = request.get_data()
    data_length = request.content_length
    # note: str(datetime.datetime.now()) contains spaces and colons, which
    # make awkward filenames (and are invalid on Windows)
    now = str(datetime.datetime.now())
    filename = now + ".wav"
    with open(filename, mode='bx') as f:
        f.write(data)
    # when these lines are uncommented the script doesn't return anything
    conv_text = speech_to_text(filename)
    myString = conv_text + " was saved!"
    # myString = filename + " was saved!"
    print(myString)
    # print("Processing data: ", str(data)[:20])
    # print(type(data))
    return {"result": myString, "url": myURL}
    ## return "This is a test return from the Flask API"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000, debug=True)  # port must be an int
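As written, the route's contract is: POST raw audio bytes, receive JSON with string fields `result` and `url`. A small sketch of a shape check the frontend could apply before touching the fields (`checkResponseShape` is a hypothetical helper, not part of the app):

```javascript
// Validate that a /getResult response matches the shape the frontend expects:
// an object with string-valued "result" and "url" fields.
function checkResponseShape(rawBody) {
  const data = JSON.parse(rawBody);
  if (typeof data.result !== "string" || typeof data.url !== "string") {
    throw new Error("unexpected /getResult response shape");
  }
  return data;
}
```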
The problem is that the backend receives the file before the recording is finished, and then it stops. How can I keep the recording in my local directory until I click the button, and only then send the file to the Python backend, have the Flask backend analyze it and send the result back?
Any help would be appreciated.