How do I process and play Blob audio data on iOS devices?

New to this, please bear with me.

When playing Blob audio data on an iOS device, if the data pulled out by blobQueue.current.shift() is large and the resulting audio is long, the sound suddenly cuts out partway through playback, and onEnded never fires when the track reaches the end.

import { useEffect, useRef } from 'react';
import { useEventBus } from 'event-bus-hooks';
import { useSelector } from 'react-redux';

const IosAudio = () => {
  const player = useRef(null);
  const blobQueue = useRef([]); // queue of pending Blob chunks
  const currentIndex = useRef(0);

  const handlePlay = () => {
    if (currentIndex.current > blobQueue.current.length - 1) {
      currentIndex.current = 0;
      return;
    }
    console.log('------------ next ------------');
    let obj = blobQueue.current.shift();
    console.log(obj, obj.length);
    const blob = new Blob(obj, { type: 'audio/mp3' }); // merge the queued chunks into one Blob
    player.current.src = URL.createObjectURL(blob);
    player.current.load();
  };

  // receive binary audio data (an array of Blob chunks) and queue it
  useEventBus('Global/tts_BlobList', (blobDataArr) => {
    blobQueue.current.push(blobDataArr);
  });

  const handleStop = () => {
    if (!player.current) return;
    blobQueue.current = [];
    player.current.pause();
    // release the object URL
    URL.revokeObjectURL(player.current.src);
  };

  const loadedmetadata = () => {
    console.log('audio metadata loaded');
    player.current
      .play()
      .then(() => {
        console.log('playback started');
      })
      .catch((error) => {
        console.error('playback failed:', error);
      });
  };

  // playback finished: advance to the next chunk
  const onended = () => {
    currentIndex.current++;
    handlePlay();
  };

  // listen for external play/stop commands
  useEventBus('Global/audio-play', ({ type }) => {
    if (type === 'start') {
      handlePlay();
    } else if (type === 'end') {
      handleStop();
    }
  });

  const isMute = useSelector((state) => state.chat.isMute);
  useEffect(() => {
    player.current.muted = !!isMute;
  }, [isMute]);

  return (
    <audio
      ref={player}
      src=""
      onLoadedMetadata={loadedmetadata}
      onEnded={onended}
      onError={(e) => console.error('Digital-human audio playback error:', e.message)}
      controls
    />
  );
};

export default IosAudio;

In testing, the problem shows up whenever the audio runs longer than about 17 seconds. Does anyone know what causes this? Could it be a memory issue?

AI Bot (BETA)
### Answer

When playing longer Blob audio on iOS, this kind of failure usually comes down to a few factors: memory management, audio decoding performance, and the player's state management. Here are some possible solutions and suggestions:

1. **Play the audio in chunks**:
   - Loading and playing one long audio Blob in a single shot can hit memory or decoding limits. Split the audio data into smaller segments and play them one by one. You can do this by adjusting the logic that produces the Blob data, keeping each chunk's duration within a reasonable range (for example, no more than 10 seconds).

2. **Handle the Blobs correctly**:
   - Make sure the data is merged and converted properly. In `handlePlay`, the line `let obj = blobQueue.current.shift();` may pull out a whole array of Blob data at once (if each element of `blobQueue.current` is itself an array of Blobs). Ensure each queue element is a single standalone Blob, or merge multiple Blobs into one correctly.

3. **Reset the player state**:
   - Before each new Blob plays, fully reset the audio element: stop the current playback, revoke the old object URL, then load the new Blob.

4. **Listen for error events**:
   - Strengthen the error handling by listening to the player's `onError` event so you get more detail when playback fails.

5. **Manage memory and resources**:
   - In `handleStop`, besides pausing playback and revoking the object URL, also release other playback-related resources, such as the Blobs still held in `blobQueue` (if they are no longer needed).

6. **Avoid memory leaks**:
   - Clean up all event listeners and resources when the component unmounts.

7. **Use the Web Audio API**:
   - For more demanding audio work, consider the Web Audio API. It offers far more control over decoding and playback and may cope better with long or streaming audio (see the sketch after this list).

8. **iOS-specific behavior**:
   - Safari on iOS handles Blob URLs somewhat differently from other browsers. Test on real iOS devices and check iOS-specific documentation and community discussions for known workarounds.
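
To make points 6 and 7 concrete, here is a minimal sketch (my own illustration, not the asker's code) of decoding queued Blobs with the Web Audio API and playing them through an `AudioBufferSourceNode`, chaining chunks in `onended` and closing the context on unmount. The hook name and queue shape are assumptions:

```js
import { useEffect, useRef } from 'react';

// Hypothetical hook: decode each queued Blob and play it via the Web Audio API.
const useWebAudioQueue = () => {
  const ctxRef = useRef(null);
  const queueRef = useRef([]); // Blobs waiting to be decoded and played

  useEffect(() => {
    ctxRef.current = new (window.AudioContext || window.webkitAudioContext)();
    return () => {
      // point 6: release the audio context when the component unmounts
      ctxRef.current?.close();
    };
  }, []);

  const playNext = async () => {
    const blob = queueRef.current.shift();
    if (!blob) return;
    // NOTE: older iOS Safari only supports the callback form of decodeAudioData
    const buffer = await ctxRef.current.decodeAudioData(await blob.arrayBuffer());
    const source = ctxRef.current.createBufferSource();
    source.buffer = buffer;
    source.connect(ctxRef.current.destination);
    source.onended = playNext; // chain the next chunk when this one ends
    source.start();
  };

  const enqueue = (blob) => queueRef.current.push(blob);

  return { enqueue, playNext };
};
```

Keep in mind that iOS generally requires the AudioContext to be created or resumed inside a user-gesture handler before it will produce sound.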

### Conclusion

The problem most likely stems from memory management, audio-decoding performance, or player state handling. Try chunked playback, more careful Blob handling, resetting the player between chunks, stronger error handling, tighter memory and resource management, leak prevention, and possibly the Web Audio API for the heavier lifting. And always verify the behavior on real iOS devices against the relevant iOS documentation.
1 Answer

Try overlapping chunks:

import { useEffect, useRef } from 'react';
import { useEventBus } from 'event-bus-hooks';

const IosAudio = () => {
  const player = useRef(null);
  const blobQueue = useRef([]);
  const audioContext = useRef(null);
  const nextAudioBuffer = useRef(null); // decoded buffer preloaded for the next chunk
  const isTransitioning = useRef(false); // guards against overlapping handlePlay calls
  const maxDuration = 15; // max chunk duration in seconds

  useEffect(() => {
    audioContext.current = new (window.AudioContext || window.webkitAudioContext)();
    return () => {
      if (audioContext.current) {
        audioContext.current.close();
      }
    };
  }, []);

  const processAudioBlob = async (blobData) => {
    try {
      const audioBuffer = await audioContext.current.decodeAudioData(
        await blobData.arrayBuffer()
      );
      
      // split into overlapping chunks
      const overlap = 0.1; // 100 ms overlap between chunks
      const numberOfChunks = Math.ceil(audioBuffer.duration / (maxDuration - overlap));
      
      for (let i = 0; i < numberOfChunks; i++) {
        const startTime = i * (maxDuration - overlap);
        const endTime = Math.min((i + 1) * maxDuration, audioBuffer.duration);
        
        const chunkBuffer = audioContext.current.createBuffer(
          audioBuffer.numberOfChannels,
          ((endTime - startTime) * audioBuffer.sampleRate) | 0,
          audioBuffer.sampleRate
        );
        
        // fade in/out per channel to smooth the chunk boundaries
        for (let channel = 0; channel < audioBuffer.numberOfChannels; channel++) {
          const channelData = audioBuffer.getChannelData(channel);
          const chunkData = chunkBuffer.getChannelData(channel);
          
          for (let j = 0; j < chunkBuffer.length; j++) {
            const originalIndex = ((startTime * audioBuffer.sampleRate) | 0) + j;
            if (originalIndex < audioBuffer.length) {
              let gain = 1;
              
              // fade in
              if (j < audioBuffer.sampleRate * 0.02) { // 20 ms fade-in
                gain = j / (audioBuffer.sampleRate * 0.02);
              }
              // fade out
              else if (j > chunkBuffer.length - audioBuffer.sampleRate * 0.02) {
                gain = (chunkBuffer.length - j) / (audioBuffer.sampleRate * 0.02);
              }
              
              chunkData[j] = channelData[originalIndex] * gain;
            }
          }
        }
        
        const chunk = await audioBufferToBlob(chunkBuffer);
        blobQueue.current.push(chunk);
      }
    } catch (error) {
      console.error('Audio processing error:', error);
    }
  };

  const preloadNextAudio = async () => {
    if (blobQueue.current.length > 0) {
      const nextBlob = blobQueue.current[0];
      const arrayBuffer = await nextBlob.arrayBuffer();
      nextAudioBuffer.current = await audioContext.current.decodeAudioData(arrayBuffer);
    }
  };

  const handlePlay = async () => {
    if (!blobQueue.current.length || isTransitioning.current) return;
    
    try {
      isTransitioning.current = true;
      const currentBlob = blobQueue.current.shift();
      const url = URL.createObjectURL(currentBlob);
      
      if (player.current.src) {
        URL.revokeObjectURL(player.current.src);
      }
      
      player.current.src = url;
      player.current.load(); // load() does not return a promise, so there is nothing to await
      await player.current.play();
      
      // preload (decode) the next chunk in the background
      preloadNextAudio();
      
    } catch (error) {
      console.error('Playback error:', error);
      isTransitioning.current = false;
      handlePlay();
    }
  };

  const onended = () => {
    isTransitioning.current = false;
    if (blobQueue.current.length > 0) {
      // brief delay before starting the next chunk
      setTimeout(() => {
        handlePlay();
      }, 10);
    }
  };

  // on incoming TTS data: chunk it, then start playback if the player is idle
  useEventBus('Global/tts_BlobList', async (blobDataArr) => {
    const blob = new Blob(blobDataArr, { type: 'audio/mp3' });
    await processAudioBlob(blob);
    
    if (!isTransitioning.current && player.current?.paused) {
      handlePlay();
    }
  });

  // ... other code (loadedmetadata, handleStop, etc.) unchanged from the question ...

  return (
    <audio
      ref={player}
      onLoadedMetadata={loadedmetadata}
      onEnded={onended}
      onError={(e) => {
        console.error('Audio playback error:', e);
        isTransitioning.current = false;
        handlePlay();
      }}
      controls
    />
  );
};

Or try plain chunking (I set the limit to 15 s here), tighten up the memory handling, and add error handling; see whether that solves it:

import { useRef, useState } from 'react';
import { useEventBus } from 'event-bus-hooks';

const IosAudio = () => {
  const player = useRef(null);
  const blobQueue = useRef([]);
  const currentIndex = useRef(0);

  const [isPlaying, setIsPlaying] = useState(false);
  const maxDuration = 15; // max chunk duration in seconds

 
  const processAudioBlob = async (blobData) => {
    try {
      const audioBuffer = await blobData.arrayBuffer();
      const audioContext = new (window.AudioContext || window.webkitAudioContext)();
      const buffer = await audioContext.decodeAudioData(audioBuffer);
      
      // split into chunks of at most maxDuration seconds
      const numberOfChunks = Math.ceil(buffer.duration / maxDuration);
      for (let i = 0; i < numberOfChunks; i++) {
        const startTime = i * maxDuration;
        const endTime = Math.min((i + 1) * maxDuration, buffer.duration);
        
        const chunkBuffer = audioContext.createBuffer(
          buffer.numberOfChannels,
          Math.floor((endTime - startTime) * buffer.sampleRate), // length must be an integer sample count
          buffer.sampleRate
        );
        
        // copy the samples for this chunk
        for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
          const channelData = buffer.getChannelData(channel);
          const chunkData = chunkBuffer.getChannelData(channel);
          const startSample = Math.floor(startTime * buffer.sampleRate);
          const endSample = Math.floor(endTime * buffer.sampleRate);
          
          for (let j = startSample; j < endSample; j++) {
            chunkData[j - startSample] = channelData[j];
          }
        }
        
        const chunk = await audioBufferToBlob(chunkBuffer);
        blobQueue.current.push(chunk);
      }
      // iOS limits the number of live AudioContexts; close this one when finished
      audioContext.close();
    } catch (error) {
      console.error('Audio processing error:', error);
    }
  };

  const handlePlay = async () => {
    if (!blobQueue.current.length || !player.current) return;
    
    try {
      setIsPlaying(true);
      const blob = blobQueue.current.shift();
      const url = URL.createObjectURL(blob);
      
      if (player.current.src) {
        URL.revokeObjectURL(player.current.src);
      }
      
      player.current.src = url;
      player.current.load(); // load() does not return a promise, so there is nothing to await
      await player.current.play();
    } catch (error) {
      console.error('Playback error:', error);
      setIsPlaying(false);
      handlePlay(); 
    }
  };

  // ... other code unchanged from the question ...

  useEventBus('Global/tts_BlobList', async (blobDataArr) => {
    await processAudioBlob(new Blob(blobDataArr, { type: 'audio/mp3' }));
    if (!isPlaying) {
      handlePlay();
    }
  });

  return (
    <audio
      ref={player}
      onLoadedMetadata={loadedmetadata}
      onEnded={onended}
      onError={(e) => {
        console.error('Audio playback error:', e);
        handlePlay();
      }}
      controls
    />
  );
};

Utility functions:

// Convert an AudioBuffer to a WAV Blob
async function audioBufferToBlob(audioBuffer) {
  const wav = audioBufferToWav(audioBuffer);
  return new Blob([wav], { type: 'audio/wav' });
}

// Encode an AudioBuffer as a 16-bit PCM WAV ArrayBuffer
function audioBufferToWav(buffer) {
  const numberOfChannels = buffer.numberOfChannels;
  const sampleRate = buffer.sampleRate;
  const format = 1; // PCM
  const bitDepth = 16;
  
  const bytesPerSample = bitDepth / 8;
  const blockAlign = numberOfChannels * bytesPerSample;
  
  const wav = new ArrayBuffer(44 + buffer.length * blockAlign); // must account for all channels (blockAlign, not bytesPerSample)
  const view = new DataView(wav);
  
  // WAV header
  writeString(view, 0, 'RIFF');
  view.setUint32(4, 36 + buffer.length * blockAlign, true);
  writeString(view, 8, 'WAVE');
  writeString(view, 12, 'fmt ');
  view.setUint32(16, 16, true);
  view.setUint16(20, format, true);
  view.setUint16(22, numberOfChannels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * blockAlign, true);
  view.setUint16(32, blockAlign, true);
  view.setUint16(34, bitDepth, true);
  writeString(view, 36, 'data');
  view.setUint32(40, buffer.length * blockAlign, true);
  
  // write interleaved 16-bit samples
  const offset = 44;
  const channels = [];
  for (let i = 0; i < buffer.numberOfChannels; i++) {
    channels.push(buffer.getChannelData(i));
  }
  
  for (let i = 0; i < buffer.length; i++) {
    for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
      const sample = Math.max(-1, Math.min(1, channels[channel][i]));
      view.setInt16(offset + (i * blockAlign) + (channel * bytesPerSample),
        sample < 0 ? sample * 0x8000 : sample * 0x7FFF,
        true);
    }
  }
  
  return wav;
}

function writeString(view, offset, string) {
  for (let i = 0; i < string.length; i++) {
    view.setUint8(offset + i, string.charCodeAt(i));
  }
}
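
As a quick sanity check of these utilities (a hypothetical test, not part of the original answer), you can render a short tone into an AudioBuffer with an OfflineAudioContext and feed it through the encoder. Keep in mind WAV is uncompressed PCM, so a 15 s stereo chunk at 44.1 kHz comes out to roughly 2.6 MB:

```js
// Hypothetical sanity check for audioBufferToBlob / audioBufferToWav.
async function testWavEncoder() {
  // Render 1 second of a 440 Hz sine wave offline.
  const sampleRate = 44100;
  const offline = new OfflineAudioContext(2, sampleRate, sampleRate);
  const osc = offline.createOscillator();
  osc.frequency.value = 440;
  osc.connect(offline.destination);
  osc.start();
  const rendered = await offline.startRendering();

  const blob = await audioBufferToBlob(rendered);
  // expected size: 44-byte header + 1 s * 44100 samples * 2 channels * 2 bytes ≈ 176 KB
  console.log('WAV blob size:', blob.size, 'type:', blob.type);

  // play it back through a plain <audio> element to confirm it decodes
  const audio = new Audio(URL.createObjectURL(blob));
  await audio.play();
}
```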