
I'm looking for ideas on how to handle envelope re-triggering for new notes in a monophonic sampler setup, where a click occurs if the previous note's envelope has not yet finished. In the current setup, when a new note is triggered (the synth.stop method is called), the previous note's instance is killed on the spot, which causes a click because the envelope never gets a chance to finish and reach 0 volume. Any tips are welcome.

I've also added, in the code below, my own solution that I'm not happy with: set the voice gain to 0 and then let the voice sleep for 70 ms. This introduces a 70 ms delay on user interaction but removes the clicks entirely. Any sleep value below 70 ms does not fix the clicking.
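For context on why any fade works at all: the click is a step discontinuity in amplitude, and even a very short linear ramp to zero removes it. A minimal, JSyn-independent sketch of such a fade-out ramp (class and method names here are hypothetical, purely to illustrate the idea):

```java
// Demonstrates the declick idea: a hard stop leaves the gain at some
// non-zero value (a step discontinuity = click), while a short linear
// ramp always ends exactly at 0.
public class FadeRampDemo {
    // Fill an array with a linear gain ramp from 'start' down to 0.
    static double[] fadeOut(double start, int samples) {
        double[] out = new double[samples];
        for (int i = 0; i < samples; i++) {
            out[i] = start * (1.0 - (double) (i + 1) / samples);
        }
        return out;
    }

    public static void main(String[] args) {
        // e.g. a 5 ms fade at 44.1 kHz is about 220 samples,
        // far shorter than the 70 ms sleep in the workaround.
        double[] ramp = fadeOut(0.9, 220);
        System.out.println(ramp[ramp.length - 1]); // prints 0.0
    }
}
```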

The variables are currently public static, so I can still experiment with them from wherever I call them.

Here is my listener code:

buttonNoteC1Get.setOnTouchListener(new View.OnTouchListener() {
        @Override
        public boolean onTouch(View v, MotionEvent event) {

            if (event.getAction() == MotionEvent.ACTION_UP) {
                buttonNoteC1Get.setBackgroundColor(myColorWhite); // reset gui color
                if (sample.getSustainBegin() > 0) { // trigger release for looping sample
                    ampEnv.dataQueue.queue(ampEnvelope, 3, 1); // release called
                }
                limit = 0; // reset action down limiter
                return true;
            }

            if (limit == 0) { // respond only to first touch event
                if (samplePlayer != null) { // check if a previous note exists
                    synth.stop(); // stop instance of previous note
                }
                buttonNoteC1Get.setBackgroundColor(myColorGrey); // key pressed gui color
                samplePitch = octave * 1; // set samplerate multiplier
                Sampler.player(); // call setup code for new note
                Sampler.play(); // play new note
                limit = 1; // prevent stacking of action down touch events
            }
            return false;
        }

    }); // end listener

Here is my sampler code:

public class Sampler {

public static VariableRateDataReader samplePlayer;
public static LineOut lineOut;
public static FloatSample sample;
public static SegmentedEnvelope ampEnvelope;
public static VariableRateMonoReader ampEnv;
public static MixerMonoRamped mixerMono;
public static double[] ampData;
public static FilterStateVariable mMainFilter;

public static Synthesizer synth = JSyn.createSynthesizer(new JSynAndroidAudioDevice());

// load the chosen sample, called by instrument select spinner
static void loadSample(){
    SampleLoader.setJavaSoundPreferred(false);
    try {
        sample = SampleLoader.loadFloatSample(sampleFile);
    } catch (IOException e) {
        e.printStackTrace();
    }
} // end load sample


// initialize sampler voice
static void player() {

 // Create an amplitude envelope and fill it with data.
 ampData = new double[] {
         envA, 0.9,  // pair 0, "attack"
         envD, envS, // pair 1, "decay"
         0, envS,    // pair 2, "sustain"
         envR, 0.0,  // pair 3, "release"
        /* 0.04, 0.0 // pair 4, "silence" */
 };

    // initialize voice
    ampEnvelope = new SegmentedEnvelope(ampData);
    synth.add(ampEnv = new VariableRateMonoReader());
    synth.add(lineOut = new LineOut());
    synth.add(mixerMono = new MixerMonoRamped(2));
    synth.add(mMainFilter = new FilterStateVariable());

    // connect signal flow
    mixerMono.output.connect(mMainFilter.input);
    mMainFilter.output.connect(0, lineOut.input, 0);
    mMainFilter.output.connect(0, lineOut.input, 1);

    // set control values
    mixerMono.amplitude.set(sliderVal / 100.0f);
    mMainFilter.amplitude.set(0.9);
    mMainFilter.frequency.set(mainFilterCutFloat);
    mMainFilter.resonance.set(mainFilterResFloat);

    // initialize and connect sampler voice
    if (sample.getChannelsPerFrame() == 1) {
        synth.add(samplePlayer = new VariableRateMonoReader());
        ampEnv.output.connect(samplePlayer.amplitude);
        samplePlayer.output.connect(0, mixerMono.input, 0);
        samplePlayer.output.connect(0, mixerMono.input, 1);
    } else if (sample.getChannelsPerFrame() == 2) {
        synth.add(samplePlayer = new VariableRateStereoReader());
        ampEnv.output.connect(samplePlayer.amplitude);
        samplePlayer.output.connect(0, mixerMono.input, 0);
        samplePlayer.output.connect(1, mixerMono.input, 1);
    } else {
        throw new RuntimeException("Can only play mono or stereo samples.");
    }

} // end player

// play the sample
public static void play() {

    if (samplePlayer != null) {
        samplePlayer.dataQueue.clear();
        samplePlayer.rate.set(sample.getFrameRate() * samplePitch); // set pitch
    }

    // start the synth engine
    synth.start();
    lineOut.start();
    ampEnv.start();

   // play one shot sample
    if (sample.getSustainBegin() < 0) {
        samplePlayer.dataQueue.queue(sample);
        ampEnv.dataQueue.queue( ampEnvelope );

    // play sustaining sample
    } else {
        samplePlayer.dataQueue.queueOn(sample);
        ampEnv.dataQueue.queue( ampEnvelope, 0,3);
        ampEnv.dataQueue.queueLoop( ampEnvelope, 1, 2 );
    }
} // end play
} // end class Sampler

The unsatisfactory solution that introduces the 70 ms delay changes the action-down handling of the listener above to:

 if (limit == 0) {
     if (samplePlayer != null) {
         mixerMono.amplitude.set(0);
         try {
             synth.sleepFor(0.07);
             synth.stop(); // stop instance of previous note
         } catch (InterruptedException e) {
             e.printStackTrace();
         }
     }
     // ... rest of the action-down handling unchanged

1 Answer


You should not be calling synth.start() and synth.stop() for every note. Think of it like turning on a physical synthesizer. Just start the synth and the lineOut once. You don't need to start() the ampEnv if it is connected, directly or indirectly, to something else that has been start()ed.

Then, when you want to start a note, just queue your sample and your envelope.

When you are all done playing notes, then call synth.stop().
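The advice above can be sketched against the Sampler class from the question. This is only a rough outline assuming a mono, one-shot sample; the unit generators and dataQueue calls are the ones already used in the question, only the start/stop lifecycle is rearranged:

```java
// Sketch: build and start the engine ONCE, then only queue per note.
static void setup() {
    synth.add(ampEnv = new VariableRateMonoReader());
    synth.add(lineOut = new LineOut());
    synth.add(samplePlayer = new VariableRateMonoReader());
    ampEnv.output.connect(samplePlayer.amplitude);
    samplePlayer.output.connect(0, lineOut.input, 0);
    samplePlayer.output.connect(0, lineOut.input, 1);
    synth.start();   // once, like powering on the hardware
    lineOut.start(); // once
}

// Per note: no synth.stop(), just clear and re-queue the queues.
static void noteOn() {
    samplePlayer.dataQueue.clear();
    samplePlayer.rate.set(sample.getFrameRate() * samplePitch);
    samplePlayer.dataQueue.queue(sample);
    ampEnv.dataQueue.clear();
    ampEnv.dataQueue.queue(ampEnvelope); // envelope restarts from attack
}

// Only when tearing the whole instrument down:
static void shutdown() {
    synth.stop();
}
```

This sketch assumes the fields (synth, ampEnv, lineOut, samplePlayer, sample, ampEnvelope, samplePitch) from the question's Sampler class and depends on the JSyn library, so it is not runnable on its own.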

answered 2018-05-26T05:03:09.573