Compare commits

...

No commits in common. "dev" and "rapp/detached-control-value-iframe" have entirely different histories.

438 changed files with 3442 additions and 48276 deletions

@@ -1,24 +0,0 @@
**/.classpath
**/.dockerignore
**/.git
**/.gitignore
**/.project
**/.settings
**/.toolstarget
**/.vs
**/.vscode
**/*.*proj.user
**/*.dbmdl
**/*.jfm
**/charts
**/docker-compose*
**/compose*
**/Dockerfile*
**/node_modules
**/npm-debug.log
**/obj
**/secrets.dev.yaml
**/values.dev.yaml
LICENSE
README.md

@@ -1,27 +0,0 @@
module.exports = {
root: true,
extends: [
'@nuxtjs/eslint-config',
'@nuxtjs/eslint-config-typescript',
'plugin:jsonc/recommended-with-jsonc'
],
ignorePatterns: [
'assets/',
'**/*.svg',
'**/*.png',
'**/*.md',
'i',
'cert/',
'android',
'ios',
'Dockerfile*',
'*.dev',
'*.ts'
],
rules: {
'vue/singleline-html-element-content-newline': 'off',
'vue/singleline-html-element-content-whitespace': 'off',
'vue/no-v-html': 'error', // Prevents unsafe HTML
'@typescript-eslint/no-unused-vars': 0
}
}

.gitignore vendored

@@ -1,45 +1,24 @@
# Dependency directories
node_modules/
yarn.lock
# Nuxt build output
.nuxt
.nuxt/
dist/
.nuxt-build/
.output/
.env
# Our Noise
!public/masking/fullband.wav
# Capacitor build files
android/
ios/
# Production build files
build/
*.tar
*.tar.gz
# Dotfiles and directories
.env.*
!.env.example
.DS_Store
.gitattributes
**/.DS_Store
# Log files
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
logs/
pnpm-debug.log*
lerna-debug.log*
# Local development
local/
yarn.lock
node_modules
dist
dist-ssr
*.local
# Audio files
*.wav
* text=auto
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

.npmrc

@@ -1 +0,0 @@
shamefully-hoist=true

@@ -1,5 +0,0 @@
{
"recommendations": [
"dbaeumer.vscode-eslint",
]
}

.vscode/launch.json vendored

@@ -1,15 +0,0 @@
{
"version": "0.2.0",
"configurations": [
{
"type": "chrome",
"request": "launch",
"name": "Nuxt 3: Launch Chrome on HTTPS",
"url": "https://localhost:3000",
"webRoot": "${workspaceFolder}",
"runtimeArgs": [
"--ignore-certificate-errors"
]
}
]
}

.vscode/settings.json vendored

@@ -1,29 +0,0 @@
{
<<<<<<< HEAD
"editor.codeActionsOnSave": {
"source.fixAll.eslint": "explicit"
},
"eslint.validate": ["typescript", "javascript", "vue", "json"],
}
=======
"terminal.integrated.env.osx": {
"PATH": ""
},
"terminal.integrated.env.linux": {
"PATH": ""
},
"terminal.integrated.env.windows": {
"PATH": ""
},
"terminal.integrated.defaultProfile.osx": "bash",
"terminal.integrated.profiles.osx": {
"bash": {
"path": "bash",
"args": [
"-l"
]
}
},
"nodejs.runtime": "/usr/local/bin/node"
}
>>>>>>> simple_visualisation

@@ -1,57 +0,0 @@
# Architecture
This project is the frontend implementation of Mindboost's web application. It is a Vue 3 / Nuxt 3 application running on a Node server without server-side rendering.
It consists essentially of login and registration functionality, an onboarding flow that collects relevant information such as the input device and personal preferences, a settings menu, and four index pages that contain the RNBO player, which is needed to use the patented algorithm and is where all audio output happens.
**login & registration**
These branches are used to experiment on your own on a specific task while always keeping a personal backup
Path styling: `/yourname/task-or-feature-name`
**onboarding**
These branches are used to add team-wide tooling features like ESLint for a better coding experience
Path styling: `/tooling/tool-being-applied`
**settings menu**
These branches are used to develop features for the app
Path styling: `/feature/name-of-the-branch`
**index pages**
These branches are used to develop features for the app
Path styling: `/feature/name-of-the-branch`
**RNBO Player**
The RNBO player is a Vue component that provides the application's audio output via the Web Audio API. It loads two RNBO devices (`music device`, `noise device`), each with its own `patch.export.json` (exported from the RNBO application). The devices are loaded as audio nodes so that we can connect them to other nodes of the same `AudioContext`. The `music device` has three inputs: on channel 0 we attach a `MediaStreamAudioSourceNode`, a direct stream of the user's chosen microphone; on channel 1 we attach the left channel of a `ChannelSplitterNode`; on channel 2 we attach the right channel of the same `ChannelSplitterNode`. The two output channels of the `music device` are attached to the `noise device` (the second RNBO device) and to the music `GainNode`. The `noise device` has five input channels: on channel 0 we attach the same `MediaStreamAudioSourceNode` we use for the `music device`; on channel 1 we attach the left channel of a `ChannelSplitterNode` and on channel 2 the right channel. This `ChannelSplitterNode` gets its two input channels from an `AudioBufferSourceNode`, a buffer containing the masking noise. The masking noise is fetched from the resources provided in the public folder of the Node server. Channel 3 of the `noise device` is the left output of the `music device` passed through another `ChannelSplitterNode`, and channel 4 is the right output.
The output of the `noise device` is attached to the noise `GainNode`. The two gain nodes (music `GainNode` and noise `GainNode`) are connected to the destination of the `AudioContext`. The destination is the user-selected audio output, so you should hear both the music and the noise. The gain values of the gain nodes can be managed via the UI sliders of the RNBOPlayer component.
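The wiring described above can be summarised as a plain routing table. This is an illustrative sketch only (the node names here are made up for readability; the real component creates actual Web Audio nodes):

```javascript
// Illustrative routing table for the RNBO node graph described above.
// Keys are nodes; `inputs` maps input channel numbers to their sources.
const routing = {
  musicDevice: {
    inputs: {
      0: 'micSource',           // MediaStreamAudioSourceNode (microphone)
      1: 'musicSplitter.left',  // ChannelSplitterNode, left channel
      2: 'musicSplitter.right'  // ChannelSplitterNode, right channel
    },
    outputsTo: ['musicGain', 'musicOutSplitter'] // feeds the noise device too
  },
  noiseDevice: {
    inputs: {
      0: 'micSource',             // same microphone source as the music device
      1: 'noiseSplitter.left',    // split from the masking-noise AudioBufferSourceNode
      2: 'noiseSplitter.right',
      3: 'musicOutSplitter.left', // left output of the music device
      4: 'musicOutSplitter.right' // right output of the music device
    },
    outputsTo: ['noiseGain']
  },
  musicGain: { outputsTo: ['destination'] },
  noiseGain: { outputsTo: ['destination'] }
}

// Sanity check: five noise-device inputs, three music-device inputs.
console.log(Object.keys(routing.noiseDevice.inputs).length) // 5
console.log(Object.keys(routing.musicDevice.inputs).length) // 3
```

Both gain nodes end at `destination`, which is why the UI sliders can balance music against noise independently.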
**Use of Web Audio API**
The following pages and components make use of Web Audio:
* AcusticCheck.vue: for analysis purposes.
* RNBOPlayer.vue: to create the RNBO devices and attach them to the right sources and destinations, as described under *RNBO Player* above.
* homeforest.vue: requires access to the microphone for updating the VU meter, which shows a simple dynamic bar chart of the user's microphone input.
* homemeadow.vue: requires access to the microphone for updating the VU meter...
* hometropics.vue: requires access to the microphone for updating the VU meter...
* index.vue: requires access to the microphone for updating the VU meter...
* onboarding.vue: requires access to the microphone for updating the VU meter...
**Path Management**
In this project, some files are linked to our algorithm. To modify the paths, it is possible to change, for example, the source of the audio files via the `.env` file.
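For illustration, such an override might look like this in the `.env` file (the variable names below are hypothetical; check the Nuxt runtime config for the actual keys):

```ini
# Hypothetical example – the real keys depend on nuxt.config / useRuntimeConfig()
NUXT_PUBLIC_NOISE_SRC=/masking/fullband.wav
NUXT_PUBLIC_TRACKS_MEADOW_SRC=/tracks/meadow.mp3
```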
**AudioStore**
One problem we are facing in the application is that, due to the current architecture, every time we change the page the old page is unmounted and with it the `AudioContext`. As the `AudioContext` needs to be unlocked via user interaction, this would always require user input before we can use the Web Audio API. To avoid this behaviour we decided to have a shared audio context that only needs to be unlocked once, so that we can use it in the home pages of our soundscapes (e.g. `homeforest.vue`) and within `RNBOPlayer.vue`.
We use a Pinia store for it. The audio store provides different functionality for the whole application as long as it is imported correctly.
It has a shared state...
* audioContext : the audio context we create within the action `initializeAudioContext()` using Howler.js, which brings along the functionality to automatically unlock the `AudioContext`.
* microphone : an object of class `Microphone`, defined in audio.ts as well. It is a wrapper for `microphoneStream`, an instance of `MediaStream`, and `microphoneNode`, an instance of `MediaStreamAudioSourceNode`. The nodes are used to access the microphone; our `microphoneNode` is connected to the RNBO device and the `MediaStream` is used for updating the VU meter.
* nodes : an array of `AudioNodeItem` objects. `AudioNodeItem`s are wrappers to track the type and state of the available `AudioNode` objects in our application.
* headsetType : the type of headset the user selected. This is important to calculate the correct attention factor in RNBODevice.
* ancDevice : the user's selection of whether they have a headset with active noise cancelling (ANC) functionality. This is used to calculate the correct attention factor.
* connectedSoundScape : the soundscape selected by the user.
* playing : the shared state of whether audio is currently playing. This is used by UI elements in the homebar, the bottombar and the RNBOPlayer to display the correct play/pause button.
* acusticCheckResult : the calculated result of the current acoustic check, not used so far.
* audioFiles : an array of `AudioBuffer`s once they are loaded into memory. After fetching the audio sources from the Node server, the audio files are saved in IndexedDB. Whether a buffer is loaded from IndexedDB or directly from the Axios response, it is saved in `audioFiles`. This makes it easy to create new `AudioNode`-like objects from preloaded buffers and is the most important measure to avoid long loading times and delays.
* loadingBuffers : a store-internal state representing that one or more buffers are currently being fetched or processed
...and shared actions
* to be done
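The shared-context idea boils down to a lazily created singleton that survives page changes. A minimal, framework-free sketch of that pattern (the real store uses Pinia and Howler.js; `createContext` here stands in for `new AudioContext()` so the sketch runs anywhere):

```javascript
// Minimal sketch of the shared-AudioContext idea, without Pinia or Howler.js.
let sharedContext = null

function getContext (createContext = () => ({ state: 'suspended' })) {
  // Create the context once; every later caller gets the same instance,
  // so a single user interaction is enough to unlock it for the whole app.
  if (sharedContext === null) {
    sharedContext = createContext()
  }
  return sharedContext
}

function unlock () {
  // Called from a user-interaction handler (click/tap); Howler.js automates this.
  getContext().state = 'running'
}

const a = getContext()
unlock()
const b = getContext()
console.log(a === b) // true – pages share one context
console.log(b.state) // 'running'
```

Because components only ever call the getter, no page unmount can destroy the context; only a deliberate `close()` (as in `refreshAudioContext()` below) replaces it.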

@@ -1,25 +0,0 @@
FROM node:18.19.1 as builder
WORKDIR /app
ARG BACKEND_URL
ENV BACKEND_URL ${BACKEND_URL}
ENV NODE_OPTIONS=--openssl-legacy-provider
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM node:18-slim as run
WORKDIR /app
ARG BACKEND_URL
ENV BACKEND_URL ${BACKEND_URL}
COPY package*.json ./
COPY --from=builder /app/.output .output
RUN ls -la /app
EXPOSE 3000
CMD [ "node", ".output/server/index.mjs"]

@@ -1,20 +0,0 @@
FROM node:18.14.2
WORKDIR /app
ENV NODE_OPTIONS=--openssl-legacy-provider
# Install dependencies
COPY package*.json ./
RUN npm install
# Copy the remaining files and build the project
COPY . .
RUN npm run build
ENV PATH /app/node_modules/.bin:$PATH
# Expose the port for the development server
EXPOSE 3000
# Start the development server when the container runs
CMD ["npm", "run", "dev"]

Jenkinsfile vendored

@@ -1,70 +0,0 @@
pipeline {
agent any
stages {
stage('Checkout') {
steps {
script {
checkout([
$class: 'GitSCM',
branches: [[name: 'pipeline/deploy-image']],
userRemoteConfigs: [[
url: 'https://gitea.mindboost.team/Mindboost/mindboost-webapp.git',
credentialsId: 'b5f383be-8c74-40f9-b7e1-3a9c5856df0e'
]]
])
}
}
}
stage('Check Repository') {
steps {
script {
sh 'pwd' // Shows the current directory
sh 'ls -la' // Checks that the .git directory exists
sh 'git status' // Makes sure the repo is initialised correctly
}
}
}
stage('Get Commit Hash') {
steps {
script {
env.WEBAPP_COMMIT_HASH = sh(
script: 'git rev-parse --short HEAD',
returnStdout: true
).trim()
echo "Commit Hash: ${env.WEBAPP_COMMIT_HASH}"
}
}
}
stage('Check Docker Image with the same tag') {
steps {
script {
def imageExists = sh(
script: "docker images -q mindboost_frontend_image:${env.WEBAPP_COMMIT_HASH} || true",
returnStdout: true
).trim()
if (imageExists) {
echo "Docker image with tag ${env.WEBAPP_COMMIT_HASH} already exists. Skipping build."
currentBuild.result = 'SUCCESS'
return
} else {
echo "No existing Docker image found. Building a new image..."
}
}
}
}
stage('Build Docker Image') {
when {
expression { currentBuild.result == null } // Only if no image existed before
}
steps {
script {
sh "docker build --build-arg BACKEND_URL=https://b.mindboost.team --rm -t mindboost_frontend_image:${env.WEBAPP_COMMIT_HASH} . "
}
}
}
}
}

README.md

@@ -1,150 +0,0 @@
# Contribution
If you want to contribute, pull the branch `dev` and create a new branch from it. You are free to push your branch to keep a personal backup of your contributions.
We recommend three types of branches:
**Personal branches**
These branches are used to experiment on your own on a specific task while always keeping a personal backup
Path styling: `/yourname/task-or-feature-name`
**Tooling branches**
These branches are used to add team-wide tooling features like ESLint for a better coding experience
Path styling: `/tooling/tool-being-applied`
**Feature branches**
These branches are used to develop features for the app
Path styling: `/feature/name-of-the-branch`
## Merging branches
The recommended way to merge is a pull request on the `dev` branch. This requires that ESLint is free of issues and the application compiles successfully.
Please commit all your changes in that branch and merge it (after another pull) into `dev` and push it, so everybody is up to date and we have no issues integrating different versions.
# Logging
This project uses pino for logging on both the client and the server side. Currently only the client side is tested.
## How to use
In pages and components that run in the client, you should use `this.$logger` for logging:
```js
this.$logger.trace('Trace or Trance')
this.$logger.error('This is an error')
this.$logger.debug('This is a debugging log.')
this.$logger.info('This is an info log.')
this.$logger.warn('This is a warning log.')
this.$logger.fatal({ ...data, level: 'fatal' })
```
Please use this especially in catch blocks. It uses pino as the underlying framework.
## Setup
If you don't use Node 19 or lower, you need to work around this with the Node Version Manager (nvm).
Install nvm and register it by running this in your terminal:
```sh
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
```
Make sure you install Node 18 or 19 and then run `nvm use 18` (or `19`) before you continue.
```bash
nvm install 19
```
```bash
nvm use 19
```
Make sure to install the dependencies:
```bash
# yarn
yarn install
# npm
npm install
# pnpm
pnpm install --shamefully-hoist
```
## Development Server
Start the development server:
```bash
npm run dev
```
## Linting
Start the linting
```bash
npm run lint
```
```bash
npm run lint:fix
```
## Production
Build the application for production:
```bash
npm run build
```
Locally preview production build:
```bash
npm run preview
```
Check out the [deployment documentation](https://nuxt.com/docs/getting-started/deployment) for more information.
## Container
You can build the app as a Docker container with the existing Dockerfile. The image can be deployed via a Docker Compose file that is not part of the current frontend project but a composition of the frontend and backend Docker containers. It can be found on our current Strato development server and on Gitea.
## Mobile
You can build the app for Android and iOS, but this project still uses the browser API for accessing the user's media. For this reason the app currently runs on neither Android nor iOS.
Build the application for production:
```bash
npm run generate
```
Create the Android build with Capacitor:
Capacitor should already be installed as an npm package, but check this first if you run into issues.
```bash
npx cap add android
```
```bash
npx cap add ios
```
Update the Android/iOS build with Capacitor (after you changed something):
```bash
npx cap sync android
```
```bash
npx cap sync ios
```
Open the app in Android Studio or Apple's Xcode
```bash
npx cap open android
```
```bash
npx cap open ios
```
Check out the [deployment documentation](https://nuxt.com/docs/getting-started/deployment) for more information.
You can also check out Capacitor and Vue 3.

@@ -1,66 +0,0 @@
<template>
<div v-if="false">AudioStateHandler</div>
</template>
<script>
import { mapState, mapActions } from 'pinia'
import { useAudioStore } from '~/stores/audio'
export default {
name: 'AudioStateHandler',
computed: {
...mapState(useAudioStore, ['playing'])
},
watch: {
playing (newValue) {
this.$logger.log('Global playing state changed', newValue)
}
},
mounted () {
window.addEventListener('keydown', this.handleKeyDown)
this.$logger.log('Global audio state handler mounted')
// Set up MediaSession API
if ('mediaSession' in navigator) {
navigator.mediaSession.setActionHandler('play', () => this.setPlaying(true))
navigator.mediaSession.setActionHandler('pause', () => this.setPlaying(false))
}
},
beforeUnmount () {
window.removeEventListener('keydown', this.handleKeyDown)
},
methods: {
...mapActions(useAudioStore, ['setPlaying'])
/** handleKeyDown (e) {
const activeElement = document.activeElement
const tagName = activeElement.tagName.toLowerCase()
// List of elements where spacebar interaction should be preserved
const interactiveElements = [
'input', 'textarea', 'button', 'select', 'option',
'video', 'audio', 'a', 'summary'
]
// Check for contenteditable attribute
const isContentEditable = activeElement.getAttribute('contenteditable') === 'true'
// Check for custom data attribute that might indicate spacebar interaction
const usesSpacebar = activeElement.getAttribute('data-uses-spacebar') === 'true'
if (e.code === 'Space' &&
!interactiveElements.includes(tagName) &&
!isContentEditable &&
!usesSpacebar) {
this.$logger.log('Space key pressed in AudioStateHandler')
e.preventDefault() // Prevent the default action (scrolling)
this.setPlaying(!this.playing)
this.$logger.log('New playing state:', this.playing)
}
} */
}
}
</script>

@@ -1,10 +0,0 @@
<!-- eslint-disable vue/multi-word-component-names -->
<template>
<div>
<RNBOPlayer />
</div>
</template>
<script setup>
import RNBOPlayer from '~/archive/components/tests/RNBOPlayer.vue'
</script>

@@ -1,24 +0,0 @@
<!-- eslint-disable vue/valid-template-root -->
<template></template>
<script>
import { usePlayerControls } from '@/composables/usePlayerControls'
export default {
name: 'AudioStateHandler',
watch: {
playing (newValue) {
this.$logger.log('Global playing state changed', newValue)
}
},
mounted () {
// usePlayerControls().addSpaceListener()
// Set up MediaSession API
if ('mediaSession' in navigator) {
// usePlayerControls().addMediaControls()
}
}
}
</script>

@@ -1,158 +0,0 @@
<template>
<div class="player">
<div class="slider-wrapper">
<div v-if="muted" @click="toggleMute">
<img style="width: 25px" src="~/assets/image/noiseicon_muted.svg">
</div>
<div v-else @click="toggleMute">
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</div>
<div class="slider">
<input
id="gain-control-noise"
v-model="volume"
type="range"
min="0"
max="1"
step="0.005"
@wheel.prevent="changeVolumeOnWheel"
>
<span class="slider-progress-bar" :style="{ width: `${volume * 100}%` }" />
</div>
</div>
<NoiseControlledWebAudio3Band
v-for="(frequency, index) in frequencies"
ref="Player"
:key="frequency"
:master-attack="masterAttack"
:master-release="masterRelease"
:center-frequency="frequency"
:master-gain="masterGain"
:q-factor="qFactors[index]"
@ready="onBandReady"
@update:mid-volume="controlMusicGain"
/>
</div>
</template>
<script lang="ts">
import { useAudioStore } from '../../stores/audio'
import { useMicStore } from '~/stores/microphone'
import type { Microphone } from '~/stores/interfaces/Microphone'
import NoiseControlledWebAudio3Band from '~/components/experiments/tests/ControlValues/NoiseControlledWebAudio3Band'
export default {
name: 'AdaptiveNoiseGain',
components: {
NoiseControlledWebAudio3Band
},
emits: ['musicGain'],
setup () {
const masterGain = ref(useAudioStore().getMasterGainNoise())
const player = ref(null)
const { t } = useI18n()
const frequencies = ref([150, 1500, 8000])
const qFactors = ref([0.8, 0.9, 0.6])
const loadedBands = ref(0)
const muted = computed(() => useAudioStore().getNoiseVolume < 0.01)
let oldVolume = 0
const route = useRoute()
const isExperimentsRoute = computed(() => route.path.match(/\/[a-z]{2}\/experiments/))
const masterAttack = ref(120000 * 2) // Beispielwert in Samples
const masterRelease = ref(144000 * 2)
const loading = computed(() => loadedBands.value < frequencies.value.length)
const onBandReady = () => {
loadedBands.value++
}
const toggleMute = () => {
if (!muted.value) {
oldVolume = masterGain.value.gain.value
masterGain.value.gain.linearRampToValueAtTime(0, masterGain.value.context.currentTime + 0.4)
useAudioStore().setNoiseVolume(0)
} else if (oldVolume > 0) {
masterGain.value.gain.linearRampToValueAtTime(oldVolume, masterGain.value.context.currentTime + 0.4)
useAudioStore().setNoiseVolume(oldVolume)
} else {
masterGain.value.gain.linearRampToValueAtTime(1, masterGain.value.context.currentTime + 0.4)
useAudioStore().setNoiseVolume(1)
}
}
return {
frequencies,
loading,
onBandReady,
t,
loadedBands,
masterAttack,
masterRelease,
isExperimentsRoute,
qFactors,
masterGain,
toggleMute,
muted,
player
}
},
data () {
return {
audioContext: useAudioStore().getContext(),
musicReady: false,
tropics_src: window.location.origin + useRuntimeConfig().public.tracks.masking_src as string,
fading: false,
connected: false,
volume: useAudioStore().noiseVolume,
previousVolume: useAudioStore().noiseVolume
}
},
mounted () {
},
watch: {
volume (newVolume: number) {
const audioStore = useAudioStore()
audioStore.setNoiseVolume(newVolume)
if (!isNaN(newVolume)) {
const masterGain = audioStore.getMasterGainNoise()
// The ramp end time is an absolute context time, not a duration
masterGain.gain.linearRampToValueAtTime(newVolume, masterGain.context.currentTime + 0.125)
}
}
},
beforeUnmount () {
const micro = useMicStore().getMicrophone() as Microphone
micro.microphoneStream?.getTracks().forEach(m => m.stop())
},
methods: {
changeVolumeOnWheel (event:WheelEvent) {
// Adjust volume on wheel scroll
const gainValue = this.volume
const deltaY = event.deltaY
if (deltaY < 0) {
const volumeAdd = (Math.min(1, gainValue + 0.02))
this.volume = volumeAdd
} else {
const volumeCut = (Math.max(0, gainValue - 0.02))
this.volume = volumeCut
}
},
controlMusicGain (value: string) {
this.$emit('musicGain', value)
},
handleCanPlayNoise () {
// useNuxtApp().$logger.log('NoiseElemeint has now playingstate: ' + state)
this.musicReady = true
},
readyForWebaudio () {
if (!this.musicReady) {
// useNuxtApp().$logger.log('music not ready')
return false
}
return true
}
}
}
</script>

@@ -1,128 +0,0 @@
<template>
<AudioElement
ref="Noise"
key="5"
:src="noise_src"
tooltip-title="Click, scroll or touch to change volume of noise"
title="Noise"
@update:volume="updateNoiseGain"
>
<template #default="{}">
<div class="icon">
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</div>
</template>
</AudioElement>
<AudioElement
ref="Music"
key="1"
:src="lagoon_src"
tooltip-title="Click, scroll or touch to change volume of music"
title="Lagoon"
@update:volume="updateMusicGain"
@update:playing="handlePlayingUpdate"
@update:fadeout="fadeOutMusic"
>
<template #default="{ }">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</div>
</template>
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</template>
<script lang="ts">
import AudioElement from '~/components/experiments/AudioElement.vue'
import { useAudioStore } from '~/stores/audio'
export default {
name: 'NoiseMusicGainLagoon',
components: { AudioElement },
data () {
return {
audioContext: useAudioStore().getContext(),
playing: false,
paused: false,
createdNodes: {} as any,
lagoon_src: window.location.origin + useRuntimeConfig().public.tracks.lagoon_src as string,
noise_src: window.location.origin + useRuntimeConfig().public.noise_src as string
}
},
mounted () {
if (this.audioContext) {
// useNuxtApp().$logger.log('Audiocontext available ', this.audioContext)
}
// useNuxtApp().$logger.log('I created two AudioElements')
},
methods: {
// This method helps to free the resources when we stop playing the audio.
// Without it, the audio would be louder each time we start playing.
refreshAudioContext () {
const newAudioContext = new AudioContext()
this.audioContext.close()
useAudioStore().audioContext = newAudioContext
this.audioContext = useAudioStore().getContext()
},
fadeOutMusic () {
if (this.createdNodes.noiseGain) {
const noiseGainValue = this.createdNodes.noiseGain.gain.value
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(noiseGainValue, this.audioContext.currentTime)
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 1.3)
}
if (this.createdNodes.musicGain) {
const musicGainValue = this.createdNodes.musicGain.gain.value
this.createdNodes.musicGain.gain.linearRampToValueAtTime(musicGainValue, this.audioContext.currentTime)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 1.3)
}
},
handlePlayingUpdate (state: boolean) {
if (state) {
const noiseElement = this.$refs.Noise as typeof AudioElement
const noiseAudioElement = noiseElement.$refs.audioElement as HTMLMediaElement
const musicElement = this.$refs.Music as typeof AudioElement
const musicAudioElement = musicElement.$refs.audioElement as HTMLMediaElement
const audioContext = this.audioContext
const destination = this.audioContext.destination
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.musicGain.gain.setValueAtTime(0, audioContext.currentTime)
this.createdNodes.noiseGain.gain.setValueAtTime(0, audioContext.currentTime)
this.createdNodes.noiseSource = audioContext.createMediaElementSource(noiseAudioElement)
this.createdNodes.musicSource = audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.noiseSource.connect(this.createdNodes.noiseGain)
this.createdNodes.musicSource.connect(this.createdNodes.musicGain)
this.createdNodes.noiseGain.connect(destination)
this.createdNodes.musicGain.connect(destination)
musicAudioElement.muted = false
noiseAudioElement.muted = false
this.createdNodes.noiseGain.gain.cancelScheduledValues(this.audioContext.currentTime)
this.createdNodes.musicGain.gain.cancelScheduledValues(this.audioContext.currentTime)
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(1, this.audioContext.currentTime + 1.3)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(1, this.audioContext.currentTime + 1.3)
useAudioStore().playing = true
} else {
// Music has just stopped; react to it.
// useNuxtApp().$logger.log('Stop everything webaudio is still running')
this.createdNodes = {}
this.refreshAudioContext()
}
},
updateNoiseGain (volume: number) {
if (this.createdNodes.noiseGain) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
}
},
updateMusicGain (volume: number) {
if (this.createdNodes.musicGain) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
}
}
}
}
</script>

@@ -1,123 +0,0 @@
<template>
<AudioElement
ref="Noise"
key="5"
:src="noise_src"
title="Noise"
@update:volume="updateNoiseGain"
>
<template #default="{}">
<div class="icon">
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</div>
</template>
</AudioElement>
<AudioElement
ref="Music"
key="1"
:src="meadow_src"
title="Meadow"
@update:volume="updateMusicGain"
@update:playing="handlePlayingUpdate"
@update:fadeout="fadeOutMusic"
>
<template #default="{ }">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</div>
</template>
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</template>
<script lang="ts">
import AudioElement from '~/components/experiments/AudioElement.vue'
import { useAudioStore } from '~/stores/audio'
export default {
name: 'NoiseMusicGainMeadow',
components: { AudioElement },
data () {
return {
audioContext: useAudioStore().getContext(),
playing: false,
paused: false,
createdNodes: {} as any,
meadow_src: window.location.origin + useRuntimeConfig().public.tracks.meadow_src as string,
noise_src: window.location.origin + useRuntimeConfig().public.noise_src as string
}
},
mounted () {
},
methods: {
// This method helps to free the resources when we stop playing the audio.
// Without it, the audio would be louder each time we start playing.
refreshAudioContext () {
const newAudioContext = new AudioContext()
this.audioContext.close()
useAudioStore().audioContext = newAudioContext
this.audioContext = useAudioStore().getContext()
},
fadeOutMusic () {
if (this.createdNodes.noiseGain) {
const noiseGainValue = this.createdNodes.noiseGain.gain.value
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(noiseGainValue, this.audioContext.currentTime)
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 1.3)
}
if (this.createdNodes.musicGain) {
const musicGainValue = this.createdNodes.musicGain.gain.value
this.createdNodes.musicGain.gain.linearRampToValueAtTime(musicGainValue, this.audioContext.currentTime)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 1.3)
}
},
handlePlayingUpdate (state: boolean) {
if (state) {
const noiseElement = this.$refs.Noise as typeof AudioElement
const noiseAudioElement = noiseElement.$refs.audioElement as HTMLMediaElement
const musicElement = this.$refs.Music as typeof AudioElement
const musicAudioElement = musicElement.$refs.audioElement as HTMLMediaElement
const audioContext = this.audioContext
const destination = this.audioContext.destination
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.musicGain.gain.setValueAtTime(0, audioContext.currentTime)
this.createdNodes.noiseGain.gain.setValueAtTime(0, audioContext.currentTime)
this.createdNodes.noiseSource = audioContext.createMediaElementSource(noiseAudioElement)
this.createdNodes.musicSource = audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.noiseSource.connect(this.createdNodes.noiseGain)
this.createdNodes.musicSource.connect(this.createdNodes.musicGain)
this.createdNodes.noiseGain.connect(destination)
this.createdNodes.musicGain.connect(destination)
musicAudioElement.muted = false
noiseAudioElement.muted = false
this.createdNodes.noiseGain.gain.cancelScheduledValues(this.audioContext.currentTime)
this.createdNodes.musicGain.gain.cancelScheduledValues(this.audioContext.currentTime)
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(1, this.audioContext.currentTime + 1.3)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(1, this.audioContext.currentTime + 1.3)
useAudioStore().playing = true
} else {
// Music has just stopped; react to it.
// useNuxtApp().$logger.log('Stop everything webaudio is still running')
this.createdNodes = {}
this.refreshAudioContext()
}
},
updateNoiseGain (volume: number) {
if (this.createdNodes.noiseGain) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
}
},
updateMusicGain (volume: number) {
if (this.createdNodes.musicGain) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
}
}
}
}
</script>

@@ -1,117 +0,0 @@
<template>
<AudioElement2
ref="Noise"
key="5"
:src="noise_src"
title="Noise"
@update:volume="updateNoiseGain"
>
<template #default="{}">
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</template>
</AudioElement2>
<AudioElement2
ref="Music"
key="1"
:src="meadow_src"
title="Tropics"
@update:volume="updateMusicGain"
@update:playing="handlePlayingUpdate"
>
<template #default="{ }">
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</template>
</AudioElement2>
</template>
<script lang="ts">
import AudioElement2 from '~/components/experiments/AudioElement2.vue'
import { useAudioStore } from '~/stores/audio'
export default {
name: 'NoiseMusicGainTropics',
components: { AudioElement2 },
data () {
return {
audioContext: useAudioStore().getContext(),
playing: false,
paused: false,
createdNodes: {} as any,
noise_src: window.location.origin + useRuntimeConfig().public.noise_src,
meadow_src: window.location.origin + useRuntimeConfig().public.tracks.meadow_src
}
},
mounted () {
},
methods: {
// This method helps to free the resources when we stop playing the audio.
// Without it, the audio would be louder each time we start playing.
refreshAudioContext () {
const newAudioContext = new AudioContext()
this.audioContext.close()
useAudioStore().audioContext = newAudioContext
this.audioContext = useAudioStore().getContext()
},
fadeOutMusic () {
if (this.createdNodes.noiseGain) {
const noiseGainValue = this.createdNodes.noiseGain.gain.value
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(noiseGainValue, this.audioContext.currentTime)
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 1.3)
}
if (this.createdNodes.musicGain) {
const musicGainValue = this.createdNodes.musicGain.gain.value
this.createdNodes.musicGain.gain.linearRampToValueAtTime(musicGainValue, this.audioContext.currentTime)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 1.3)
}
},
handlePlayingUpdate (state: boolean) {
if (state) {
const noiseElement = this.$refs.Noise as typeof AudioElement2
const noiseAudioElement = noiseElement.$refs.audioElement as HTMLMediaElement
const musicElement = this.$refs.Music as typeof AudioElement2
const musicAudioElement = musicElement.$refs.audioElement as HTMLMediaElement
const audioContext = this.audioContext
const destination = this.audioContext.destination
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.musicGain.gain.setValueAtTime(0, audioContext.currentTime)
this.createdNodes.noiseGain.gain.setValueAtTime(0, audioContext.currentTime)
this.createdNodes.noiseSource = audioContext.createMediaElementSource(noiseAudioElement)
this.createdNodes.musicSource = audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.noiseSource.connect(this.createdNodes.noiseGain)
this.createdNodes.musicSource.connect(this.createdNodes.musicGain)
this.createdNodes.noiseGain.connect(destination)
this.createdNodes.musicGain.connect(destination)
musicAudioElement.muted = false
noiseAudioElement.muted = false
this.createdNodes.noiseGain.gain.cancelScheduledValues(this.audioContext.currentTime)
this.createdNodes.musicGain.gain.cancelScheduledValues(this.audioContext.currentTime)
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(1, this.audioContext.currentTime + 1.3)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(1, this.audioContext.currentTime + 1.3)
useAudioStore().playing = true
} else {
// Music has just stopped; react to it.
// useNuxtApp().$logger.log('Stop everything, Web Audio is still running')
this.createdNodes = {}
this.refreshAudioContext()
}
},
updateNoiseGain (volume: number) {
if (this.createdNodes.noiseGain) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
}
},
updateMusicGain (volume: number) {
if (this.createdNodes.musicGain) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
}
}
}
}
</script>

View File

@ -1,122 +0,0 @@
<template>
<div>
<AudioElement
ref="Noise"
key="5"
:src="noise_src"
title="Noise"
@update:volume="updateNoiseGain"
>
<template #default="{}">
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</template>
</AudioElement>
<AudioElement
ref="Music"
key="1"
:src="tropics_src"
title="Tropics"
@update:volume="updateMusicGain"
@update:playing="handlePlayingUpdate"
>
<template #default="{ }">
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</template>
</AudioElement>
</div>
</template>
<script lang="ts">
import AudioElement from '../AudioElement.vue'
import { useAudioStore } from '../../stores/audio'
export default {
name: 'NoiseMusicGainTropics',
components: { AudioElement },
data () {
return {
audioContext: useAudioStore().getContext(),
playing: false,
paused: false,
createdNodes: {} as any,
noise_src: window.location.origin + useRuntimeConfig().public.noise_src,
tropics_src: window.location.origin + useRuntimeConfig().public.tracks.tropics_src
}
},
mounted () {
if (this.audioContext) {
// useNuxtApp().$logger.log('Audiocontext available ', this.audioContext)
}
// useNuxtApp().$logger.log('I created two AudioElements')
},
methods: {
// This method frees resources when we stop playing the audio.
// Without it, playback would get louder each time we start playing.
refreshAudioContext () {
const newAudioContext = new AudioContext()
this.audioContext.close()
useAudioStore().audioContext = newAudioContext
this.audioContext = useAudioStore().getContext()
},
fadeOutMusic () {
if (this.createdNodes.noiseGain) {
const noiseGainValue = this.createdNodes.noiseGain.gain.value
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(noiseGainValue, this.audioContext.currentTime)
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 1.3)
}
if (this.createdNodes.musicGain) {
const musicGainValue = this.createdNodes.musicGain.gain.value
this.createdNodes.musicGain.gain.linearRampToValueAtTime(musicGainValue, this.audioContext.currentTime)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 1.3)
}
},
handlePlayingUpdate (state: boolean) {
if (state) {
const noiseElement = this.$refs.Noise as typeof AudioElement
const noiseAudioElement = noiseElement.$refs.audioElement as HTMLMediaElement
const musicElement = this.$refs.Music as typeof AudioElement
const musicAudioElement = musicElement.$refs.audioElement as HTMLMediaElement
const audioContext = this.audioContext
const destination = this.audioContext.destination
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.musicGain.gain.setValueAtTime(0, audioContext.currentTime)
this.createdNodes.noiseGain.gain.setValueAtTime(0, audioContext.currentTime)
this.createdNodes.noiseSource = audioContext.createMediaElementSource(noiseAudioElement)
this.createdNodes.musicSource = audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.noiseSource.connect(this.createdNodes.noiseGain)
this.createdNodes.musicSource.connect(this.createdNodes.musicGain)
this.createdNodes.noiseGain.connect(destination)
this.createdNodes.musicGain.connect(destination)
musicAudioElement.muted = false
noiseAudioElement.muted = false
this.createdNodes.noiseGain.gain.cancelScheduledValues(this.audioContext.currentTime)
this.createdNodes.musicGain.gain.cancelScheduledValues(this.audioContext.currentTime)
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(1, this.audioContext.currentTime + 1.3)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(1, this.audioContext.currentTime + 1.3)
useAudioStore().playing = true
} else {
// Music has just stopped; react to it.
// useNuxtApp().$logger.log('Stop everything, Web Audio is still running')
this.createdNodes = {}
this.refreshAudioContext()
}
},
updateNoiseGain (volume: number) {
if (this.createdNodes.noiseGain) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
}
},
updateMusicGain (volume: number) {
if (this.createdNodes.musicGain) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
}
}
}
}
</script>
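`handlePlayingUpdate` above relies on the `||=` idiom (`this.createdNodes.musicGain ||= audioContext.createGain()`) so gain nodes are created once and then reused across play/stop cycles. A minimal standalone sketch of that caching behavior, with hypothetical names in place of real Web Audio nodes:

```typescript
// Hypothetical sketch of the `||=` node-caching idiom: the factory only
// runs when the slot is still empty, so later accesses reuse the same
// object instead of allocating a fresh one.
let created = 0
const makeGain = (): { id: number } => ({ id: ++created })

const cache: { gain?: { id: number } } = {}
cache.gain ||= makeGain() // first access creates the node
cache.gain ||= makeGain() // second access reuses the cached one

// created stays at 1; cache.gain.id === 1
```

This matters here because `createMediaElementSource` may only be called once per media element; caching the nodes avoids recreating sources on every play event.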

View File

@ -1,375 +0,0 @@
<template>
AudioElementManager
<div class="rnboplayer">
<AudioElement
:ref="noise.title"
:key="noise.id"
:src="noise.src"
:title="noise.title"
@update:volume="updateNoiseGain"
@update:loaded="noiseReady=true"
>
<template #default="{}">
<div class="icon">
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</div>
</template>
</AudioElement>
<AudioElement
:ref="selectedAudio.title"
:key="selectedAudio.id"
:src="selectedAudio.src"
:title="selectedAudio.title"
@update:volume="updateMusicGain"
@update:playing="handlePlayingUpdate"
@update:fadeout="fadeOutGains"
@update:loaded="musicReady=true"
>
<template #default="{ }">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</div>
</template>
</AudioElement>
</div>
</template>
<script>
import AudioElement from '@/components/experiments/AudioElement.vue'
import importedMusicPatcher from '@/assets/patch/music_patch.export.json'
import importedNoisePatcher from '@/assets/patch/noise_patch.export.json'
import { createRNBODevice } from '@/lib/AudioFunctions'
import { useAudioStore } from '~/stores/audio'
// import setupNodes from '@/components/Player/Nodes'
// import { useAudioStore } from '~/stores/audio'
export default {
components: {
AudioElement
},
props: {
soundscape: {
type: String,
default: 'Lagoon'
}
},
data () {
return {
audioList: [
{ id: 1, title: 'Lagoon', src: window.location.origin + '/sounds/lagoon.m4a' },
{ id: 2, title: 'Forest', src: window.location.origin + '/sounds/forest.m4a' },
{ id: 3, title: 'Meadow', src: window.location.origin + '/sounds/meadow.m4a' },
{ id: 4, title: 'Tropics', src: window.location.origin + '/sounds/tropics.m4a' }
],
noise: { id: 5, title: 'Noise', src: window.location.origin + '/masking/noise.flac' },
selectedAudioTitle: '',
currentElement: {},
createdNodes: {},
musicPatch: importedMusicPatcher,
noisePatch: importedNoisePatcher,
audioContext: useAudioStore().getContext()
}
},
computed: {
selectedAudio () {
return this.audioList.find(audio => audio.title === this.selectedAudioTitle) || null
}
},
beforeUnmount () {
this.disconnectNodes()
if (this.audioContext) { this.audioContext.close() }
this.audioContext = null
},
beforeMount () {
this.audioContext = new AudioContext()
},
mounted () {
if (this.soundscape) {
this.audioList = this.audioList.filter(audio => audio.title.toLowerCase() === this.soundscape.toLowerCase())
}
this.selectAudioByTitle(this.audioList[0].title)
this.currentElement = this.audioList[0]
this.addUserNavigationHandling()
this.musicPatch = importedMusicPatcher
this.noisePatch = importedNoisePatcher
},
methods: {
fadeOutGains () {
// Define the duration of the fade out
const fadeDuration = 2.0 // 2 seconds for fade out
const currentTime = this.audioContext.currentTime
const fadeEndTime = currentTime + fadeDuration
this.fading = true
if (this.createdNodes.noiseGain) {
// Cancel scheduled values to clear any previous scheduled changes
this.createdNodes.noiseGain.gain.cancelScheduledValues(currentTime)
// Set the current value
this.createdNodes.noiseGain.gain.setValueAtTime(this.createdNodes.noiseGain.gain.value, currentTime)
// Schedule the fade out
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(0, fadeEndTime)
}
if (this.createdNodes.musicGain) {
// Cancel scheduled values to clear any previous scheduled changes
this.createdNodes.musicGain.gain.cancelScheduledValues(currentTime)
// Set the current value
this.createdNodes.musicGain.gain.setValueAtTime(this.createdNodes.musicGain.gain.value, currentTime)
// Schedule the fade out
this.createdNodes.musicGain.gain.linearRampToValueAtTime(0, fadeEndTime)
}
setTimeout(() => {
this.fading = false
}, fadeDuration * 1000)
},
refreshAudioContext () {
const newAudioContext = new AudioContext()
this.audioContext.close()
useAudioStore().audioContext = newAudioContext
this.audioContext = useAudioStore().getContext()
},
changeMusicVolume (newValue) {
if (!this.createdNodes.musicGain.gain) { return }
// useNuxtApp().$logger.log(this.createdNodes.musicGain.gain)
this.createdNodes.musicGain.gain.cancelScheduledValues(this.createdNodes.musicGain.context.currentTime)
this.createdNodes.musicGain.gain.setValueAtTime(newValue, this.createdNodes.musicGain.context.currentTime + 0.01)
},
changeNoiseVolume (newValue) {
if (!this.createdNodes.noiseGain.gain) { return }
this.createdNodes.noiseGain.gain.setValueAtTime(newValue, this.createdNodes.noiseGain.context.currentTime + 0.01)
// this.createdNodes.noiseGain.gain.value = newValue / 100
},
addUserNavigationHandling () {
if ('mediaSession' in navigator) {
// Set the handler for the next track action
navigator.mediaSession.setActionHandler('nexttrack', () => {
this.skip('next')
})
// Set the handler for the previous track action
navigator.mediaSession.setActionHandler('previoustrack', () => {
this.skip('previous')
})
}
},
getSong (currentTitle, direction) {
const index = this.audioList.findIndex(song => song.title === currentTitle)
let adjacentIndex = index + (direction === 'next' ? 1 : -1)
// Loop back to the first song if 'next' goes beyond the last index
if (adjacentIndex >= this.audioList.length) {
adjacentIndex = 0
} else if (adjacentIndex < 0) {
adjacentIndex = this.audioList.length - 1
}
return this.audioList[adjacentIndex]
},
skip (direction) {
const nextSong = this.getSong(this.selectedAudioTitle, direction)
this.selectAudioByTitle(nextSong.title)
},
updateCurrentElement (title) {
this.currentElement = this.audioList.find(e => e.title === title)
},
selectAudioByTitle (title) {
this.selectedAudioTitle = title
// const audioElement = this.audioList.find(e => e.title === this.selectedAudioTitle)
this.currentElement = this.audioList[this.audioList.findIndex(e => e.title === title)]
// useNuxtApp().$logger.log('currentElement: ', this.selectedAudioTitle)
},
areAllNodesAvailable () {
// List of required node keys as you've named them in this.createdNodes
const requiredNodes = [
'musicSource', 'musicSplitter', 'microphoneSource',
'musicDevice', 'outputSplitter', 'musicGain',
'noiseSource', 'noiseSplitter', 'noiseDevice',
'noiseGain', 'merger'
]
// Check if each required node exists and is not undefined in this.createdNodes
return requiredNodes.every(nodeKey => this.createdNodes[nodeKey] !== undefined)
},
connectNodes () {
// Destructure for easier access
const {
musicSource, musicSplitter, microphoneSource,
musicDevice, outputSplitter, musicGain,
noiseSource, noiseSplitter, noiseDevice,
noiseGain, merger
} = this.createdNodes
// Assuming all nodes are created and references to them are correct
// Connect music source to splitter (Stereo to 2 Channels)
musicSource.connect(musicSplitter) // 2 channels: L, R
// Connect microphone to musicDevice input 0 (Mono)
microphoneSource.connect(musicDevice, 0, 0) // 1 channel: Microphone
// Connect musicSplitter to musicDevice inputs 1 and 2 (Stereo)
musicSplitter.connect(musicDevice, 0, 1) // 1 channel: Left
musicSplitter.connect(musicDevice, 1, 2) // 1 channel: Right
// Connect musicDevice to outputSplitter (Stereo out)
musicDevice.connect(outputSplitter) // Assuming musicDevice outputs stereo
// Optionally connect musicDevice to musicGain for additional gain control (Stereo)
musicDevice.connect(musicGain) // Assuming musicDevice outputs stereo, connected to both channels of musicGain
// Connect noise source to noiseSplitter (Stereo to 2 Channels)
noiseSource.connect(noiseSplitter) // 2 channels: L, R
// Connect microphone to noiseDevice input 0 (Mono)
microphoneSource.connect(noiseDevice, 0, 0) // 1 channel: Microphone
// Connect noiseSplitter to noiseDevice inputs 1 and 2 (Stereo)
noiseSplitter.connect(noiseDevice, 0, 1) // 1 channel: Left
noiseSplitter.connect(noiseDevice, 1, 2) // 1 channel: Right
// Connect outputSplitter to noiseDevice inputs 3 and 4 (Stereo from musicDevice)
outputSplitter.connect(noiseDevice, 0, 3) // 1 channel: Left from musicDevice
outputSplitter.connect(noiseDevice, 1, 4) // 1 channel: Right from musicDevice
// Assuming noiseDevice outputs stereo, connect to noiseGain (Stereo)
noiseDevice.connect(noiseGain) // Assuming noiseDevice outputs stereo, connected to both channels of noiseGain
// Assuming you want to merge and output both processed signals (Stereo from both musicGain and noiseGain)
// Connect noiseGain to the first two inputs of the merger (Stereo)
noiseGain.connect(merger, 0, 0) // 1 channel: Left
noiseGain.connect(merger, 0, 1) // 1 channel: Right
// Connect musicGain to the last two inputs of the merger (Stereo)
musicGain.connect(merger, 0, 2) // 1 channel: Left
musicGain.connect(merger, 0, 3) // 1 channel: Right
// Finally, connect the merger to the audio context's destination (Stereo to output)
merger.connect(merger.context.destination)
},
handlePlayingUpdate (isPlaying) {
// useNuxtApp().$logger.log('handling Playing update: ' + isPlaying + ' <-- isplaying' + this.audioContext)
const ContextClass = (window.AudioContext || window.webkitAudioContext || window.mozAudioContext || window.oAudioContext || window.msAudioContext)
let audioContext = this.audioContext ? this.audioContext : new ContextClass()
let microphoneSource = null
if (isPlaying) {
// Web Audio API is available.
// const musicAudioComponent = this.$refs[this.currentElement.title].audioElement
if (this.areAllNodesAvailable()) {
this.disconnectNodes()
// reconnecting because all Nodes are there
this.connectNodes()
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime)
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(1.0, this.audioContext.currentTime + 2)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(1.0, this.audioContext.currentTime + 3)
// useNuxtApp().$logger.log('Connected everything because it was there already')
} else {
navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: false,
noiseSuppression: false,
autoGainControl: false
},
video: false
})
.then((micStream) => {
audioContext = this.audioContext
microphoneSource = audioContext.createMediaStreamSource(micStream)
this.createdNodes.microphoneSource ||= microphoneSource
// Return both audioContext and microphoneSource to the next link in the chain
return { audioContext, microphoneSource }
})
.then(({ audioContext, microphoneSource }) => {
// First RNBO device creation
return createRNBODevice(audioContext, this.noisePatch).then((noiseRNBODevice) => {
// useNuxtApp().$logger.log('Noise RNBODEVICE created: with ' + noiseRNBODevice.node.numInputChannels + ' inputs and ' + noiseRNBODevice.node.numOutputChannels + ' outputs')
this.createdNodes.noiseRNBODevice ||= noiseRNBODevice
// Return all necessary objects for the next step
return { audioContext, microphoneSource, noiseRNBODevice }
})
})
.then(({ audioContext, microphoneSource, noiseRNBODevice }) => {
// Second RNBO device creation
return createRNBODevice(audioContext, this.musicPatch).then((musicRNBODevice) => {
// useNuxtApp().$logger.log('Music RNBODEVICE created: with ' + musicRNBODevice.node.numInputChannels + ' inputs and ' + musicRNBODevice.node.numOutputChannels + ' outputs')
// Return all necessary objects for the final setup
this.createdNodes.musicRNBODevice ||= musicRNBODevice
return { audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }
})
}).then(({ audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }) => {
// Ensure this.createdNodes is initialized
this.createdNodes ||= {}
// Assign nodes if they don't exist
this.createdNodes.microphoneSource ||= microphoneSource
this.createdNodes.musicDevice ||= musicRNBODevice.node
this.createdNodes.noiseDevice ||= noiseRNBODevice.node
this.createdNodes.noiseSource ||= audioContext.createMediaElementSource(this.$refs.Noise.$refs.audioElement)
this.createdNodes.musicSource ||= audioContext.createMediaElementSource(this.$refs[this.currentElement.title].$refs.audioElement)
this.createdNodes.musicSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.noiseSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.outputSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.merger ||= audioContext.createChannelMerger(4)
this.createdNodes.musicGain.gain.value = 0.001
this.createdNodes.noiseGain.gain.value = 0.001 // has no effect
this.connectNodes()
setTimeout(() => {
this.createdNodes.musicGain.gain.exponentialRampToValueAtTime(0.5, audioContext.currentTime + 3)
this.createdNodes.noiseGain.gain.exponentialRampToValueAtTime(0.5, audioContext.currentTime + 4)
}, 150)
})
.catch((_error) => {
this.disconnectNodes()
this.refreshAudioContext()
if (this.audioContext) {
this.audioContext.destination.disconnect()
// audioContext.destination = null
}
// useNuxtApp().$logger.error('Error setting up audio:', error)
})
}
} else {
this.disconnectNodes()
this.refreshAudioContext()
}
},
disconnectNodes () {
// Ensure this.createdNodes is defined and is an object
if (typeof this.createdNodes === 'object' && this.createdNodes !== null) {
Object.values(this.createdNodes).forEach((node) => {
// Check if the node exists and has a disconnect method
if (node && typeof node.disconnect === 'function') {
node.disconnect()
}
// Additional handling for specific types of nodes, if necessary
// For example, stopping an OscillatorNode
if (node instanceof OscillatorNode) {
node.stop()
}
})
}
}
}
}
</script>
<style scoped>
.rnboplayer {
position: fixed;
width: 220px;
max-width: 220px; /* likely redundant with the fixed width above */
height: 100px;
display: inline-grid;
z-index: 2;
bottom: 11%;
left: 0;
right: 0;
margin-left: auto;
margin-right: auto;
margin-bottom: 1em;
}
</style>
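`getSong` above wraps the playlist index at both ends so "next" past the last track returns to the first, and "previous" before the first jumps to the last. The same circular-index logic as a standalone sketch (a hypothetical helper extracted for illustration):

```typescript
// Hypothetical extraction of the circular skip logic in getSong:
// the adjacent index wraps around at both ends of the list.
function adjacentIndex (
  index: number,
  direction: 'next' | 'previous',
  length: number
): number {
  let next = index + (direction === 'next' ? 1 : -1)
  if (next >= length) {
    next = 0 // past the end: loop back to the first song
  } else if (next < 0) {
    next = length - 1 // before the start: loop to the last song
  }
  return next
}

// With a 4-track list: adjacentIndex(3, 'next', 4) === 0
// and adjacentIndex(0, 'previous', 4) === 3
```

The media-session `nexttrack`/`previoustrack` handlers registered in `addUserNavigationHandling` funnel into exactly this wrap-around behavior.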

View File

@ -1,498 +0,0 @@
<template>
<div class="rnboplayer">
<button v-if="preparedForWebAudio" @click="skip('next')">
skip sound
</button>
<button v-if="preparedForWebAudio" @click="skip('previous')">
previous sound
</button>
<button v-if="preparedForWebAudio" @click="mute">
mute AudioElements
</button>
<button v-if="preparedForWebAudio" @click="connectMusicDevice">
connect Music
</button>
<button v-if="preparedForWebAudio" @click="connectNoiseNode">
connect Noise & Music
</button>
<button v-if="preparedForWebAudio" @click="disconnectNodes">
disconnect Nodes
</button>
<button v-if="preparedForWebAudio" @click="disconnectMusicNodes">
disconnnect Music Nodes
</button>
<AudioElement
ref="Noise"
key="5"
:src="sourceNoiseURL"
title="Noise"
@volume:update="changeNoiseVolume"
>
<template #default="{}">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</template>
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
<AudioElement
v-if="selectedAudioTitle"
:ref="selectedAudio.title"
:key="selectedAudio.id"
:src="selectedAudio.src"
:title="selectedAudio.title"
@update:playing="handlePlayingUpdate"
@update:paused="handlePausedUpdate"
@volume:update="changeMusicVolume"
>
<template #default="{ }">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</div>
</template>
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</div>
</template>
<script>
import AudioElement from '@/components/experiments/AudioElement.vue'
import importedMusicPatcher from '@/assets/patch/music_patch.export.json'
import importedNoisePatcher from '@/assets/patch/noise_patch.export.json'
import importedTestPatcher from '@/assets/patch/test_patch.export.json'
import { createRNBODevice, getAttenuationFactor } from '@/lib/AudioFunctions'
import { useAudioStore } from '~/stores/audio'
import { useUserStore } from '~/stores/user'
export default {
components: {
AudioElement
},
props: {
soundscape: {
type: String,
default: 'Lagoon'
}
},
data () {
return {
audioList: [
{ id: 1, title: 'Lagoon', src: window.location.origin + '/sounds/lagoon.m4a' },
{ id: 2, title: 'Forest', src: window.location.origin + '/sounds/forest.m4a' },
{ id: 3, title: 'Meadow', src: window.location.origin + '/sounds/meadow.m4a' },
{ id: 4, title: 'Tropics', src: window.location.origin + '/sounds/tropics.m4a' }
],
noise: { id: 5, title: 'Noise', src: window.location.origin + '/masking/noise.flac' },
selectedAudioTitle: '',
currentElement: {},
createdNodes: {},
createdNodesCount: 0,
noiseRNBODevice: {},
musicPatch: importedMusicPatcher,
noisePatch: importedTestPatcher,
audioContext: null,
toggleMute: false,
readyPlayer: false,
createMusicNodeCounter: 0
}
},
computed: {
selectedAudio () {
return this.audioList.find(audio => audio.title === this.selectedAudioTitle) || null
},
isPlaying () {
return useAudioStore().isPlaying()
},
preparedForWebAudio () {
return this.areAllNodesAvailable()
},
sourceNoiseURL () {
return window.location.origin + '/masking/noise.flac'
}
},
onRenderTriggered () {
// useNuxtApp().$logger.log('render AudioNoiseTest-' + props.title)
},
watch: {
createdNodesCount: {
handler () {
if (this.areAllNodesAvailable()) {
// useNuxtApp().$logger.log('we are complete with the node generation , lets start the noise once ')
this.connectNoiseNode()
}
},
deep: true
}
},
beforeUnmount () {
this.disconnectNodes()
if (this.audioContext) { this.audioContext.close() }
this.audioContext = null
},
beforeMount () {
this.audioContext = useAudioStore().getContext()
},
mounted () {
this.selectAudioByTitle(this.audioList[0].title)
this.currentElement = this.audioList[0]
this.addUserNavigationHandling()
this.musicPatch = importedMusicPatcher
this.noisePatch = importedNoisePatcher
this.testPatch = importedTestPatcher
},
methods: {
unmute () {
const musicAudioElement = this.$refs[this.currentElement.title].audioElement
const noiseAudioElement = this.$refs.Noise.audioElement
noiseAudioElement.volume = 1
musicAudioElement.volume = 1
musicAudioElement.muted = false
noiseAudioElement.muted = false
const musicGain = this.createdNodes.musicGain
const audioContext = this.createdNodes.musicGain.context
const noiseGain = this.createdNodes.musicGain
if (musicGain.gain.value === 0) { musicGain.gain.linearRampToValueAtTime(1, audioContext.currentTime + 10) }
noiseGain.gain.linearRampToValueAtTime(1, audioContext.currentTime + 10)
useAudioStore().setPlaying(false)
},
changeMusicVolume (newValue) {
if (!this.createdNodes.musicGain.gain) { return }
// useNuxtApp().$logger.log(this.createdNodes.musicGain.gain)
this.createdNodes.musicGain.gain.cancelScheduledValues(this.createdNodes.musicGain.context.currentTime)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(newValue / 100, this.createdNodes.musicGain.context.currentTime + 0.30)
},
changeNoiseVolume (newValue) {
const noiseGain = this.createdNodes.noiseGain
if (!noiseGain) { return }
noiseGain.gain.cancelScheduledValues(noiseGain.context.currentTime)
noiseGain.gain.linearRampToValueAtTime(newValue / 100, noiseGain.context.currentTime + 0.30)
},
addUserNavigationHandling () {
if ('mediaSession' in navigator) {
// Set the handler for the next track action
navigator.mediaSession.setActionHandler('nexttrack', () => {
this.skip('next')
})
// Set the handler for the previous track action
navigator.mediaSession.setActionHandler('previoustrack', () => {
this.skip('previous')
})
}
},
getSong (currentTitle, direction) {
const index = this.audioList.findIndex(song => song.title === currentTitle)
let adjacentIndex = index + (direction === 'next' ? 1 : -1)
// Loop back to the first song if 'next' goes beyond the last index
if (adjacentIndex >= this.audioList.length) {
adjacentIndex = 0
} else if (adjacentIndex < 0) {
adjacentIndex = this.audioList.length - 1
}
return this.audioList[adjacentIndex]
},
skip (_direction) {
// useNuxtApp().$logger.log('fade out the song')
const musicGain = this.createdNodes.musicGain
if (musicGain instanceof GainNode) {
musicGain.gain.setValueAtTime(musicGain.gain.value, this.createdNodes.musicGain.context.currentTime)
musicGain.gain.linearRampToValueAtTime(0, musicGain.context.currentTime + 2)
// useNuxtApp().$logger.log('volume of musicAudio ' + musicAudioElement.volume)
}
setTimeout(() => {
// useNuxtApp().$logger.log('gain of musicAudio ' + musicGain.gain.value)
}, 6000)
// useNuxtApp().$logger.log(nextSong)
/*
setTimeout(() => {
// useNuxtApp().$logger.log('replace the song')
this.selectAudioByTitle(nextSong.title)
}, 2000) */
},
selectAudioByTitle (title) {
this.selectedAudioTitle = title // update the selected audio; a reactive prop that triggers re-rendering of the AudioElement
setTimeout(() => {
this.replaceMusicNode()
// this line triggers a re-render of the noise audio element, which is not what we want
this.currentElement = this.audioList[this.audioList.findIndex(e => e.title === title)]
}, 100)
},
areAllNodesAvailable () {
// List of required node keys as you've named them in this.createdNodes
const requiredNodes = [
'musicSource', 'musicSplitter', 'microphoneSource',
'musicDevice', 'outputSplitter', 'musicGain',
'noiseSource', 'noiseSplitter', 'noiseDevice',
'noiseGain', 'merger'
]
// Check if each required node exists and is not undefined in this.createdNodes
return requiredNodes.every(nodeKey => this.createdNodes[nodeKey] !== undefined)
},
connectMusicDevice () {
const { musicDevice, microphoneSource, musicSplitter, musicSource, musicGain, outputSplitter } = this.createdNodes
// useNuxtApp().$logger.log('connect Music device')
musicSource.connect(musicSplitter, 0, 0)
// musicSource.connect(musicSource.context.destination)
microphoneSource.connect(musicDevice, 0, 0)
musicSplitter.connect(musicDevice, 0, 1)
musicSplitter.connect(musicDevice, 1, 2)
musicDevice.connect(musicGain)
musicDevice.connect(outputSplitter)
musicGain.connect(this.audioContext.destination)
// useNuxtApp().$logger.log('Music device connected')
musicGain.gain.value = 1
},
//
// Test version 2 here: noise and music in, then gain
//
//
// Test version 1 here: merge noise and music, then gain
//
connectNoiseNode () {
// useNuxtApp().$logger.log('CONNECT NOISE NODE')
const {
noiseSplitter, microphoneSource, noiseDevice,
musicSplitter, noiseSource, noiseGain
} = this.createdNodes
this.connectMusicDevice()
noiseSource.connect(noiseSplitter, 0, 0)
microphoneSource.connect(noiseDevice, 0, 0)
noiseSplitter.connect(noiseDevice, 0, 1)
noiseSplitter.connect(noiseDevice, 1, 2) // 2 channels
musicSplitter.connect(noiseDevice, 0, 3)
musicSplitter.connect(noiseDevice, 1, 4)
const attenuation = this.noiseRNBODevice.parametersById.get('attenuation')
attenuation.value = getAttenuationFactor()
noiseDevice.connect(noiseGain)
noiseGain.connect(noiseGain.context.destination)
setTimeout(() => {
this.unmute()
}, 300)
},
connectNoiseNodeOrginal () {
// useNuxtApp().$logger.log('CONNECT NOISE NODE')
const {
noiseSplitter, microphoneSource, noiseDevice,
outputSplitter, noiseSource, noiseGain
} = this.createdNodes
this.connectMusicDevice()
noiseSource.connect(noiseSplitter, 0, 0)
microphoneSource.connect(noiseDevice, 0, 0)
noiseSplitter.connect(noiseDevice, 0, 1)
noiseSplitter.connect(noiseDevice, 1, 2) // 2 channels
outputSplitter.connect(noiseDevice, 0, 3)
outputSplitter.connect(noiseDevice, 1, 4)
const attenuation = this.noiseRNBODevice.parametersById.get('attenuation')
attenuation.value = getAttenuationFactor()
noiseDevice.connect(noiseGain)
noiseGain.connect(noiseGain.context.destination)
setTimeout(() => {
this.unmute()
}, 300)
},
handleLoadingUpdate () {
// useNuxtApp().$logger.log('Loaded Audio File we are ready to connect everything')
},
handlePlayingUpdate (isPlaying) {
// useNuxtApp().$logger.log('prepared AudioNodes ' + this.createMusicNodeCounter)
if (!isPlaying) { this.disconnectNodes() } else if (this.areAllNodesAvailable()) {
this.replaceMusicNode()
} else {
// Whatever was before, reset
if (Object.keys(this.createdNodes).length > 0) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 2)
this.disconnectNodes()
}
this.prepareWebAudioNodes()
}
},
prepareWebAudioNodes () {
this.createMusicNodeCounter++
const userStore = useUserStore()
const isAuth = userStore.isAuthenticated
// Create a new context; there might be some nodes missing.
// const ContextClass = (window.AudioContext || window.webkitAudioContext || window.mozAudioContext || window.oAudioContext || window.msAudioContext)
// useNuxtApp().$logger.log('user', { userStore }, { isAuth })
if (!isAuth) {
// useNuxtApp().$logger.error('User not logged in')
return
}
// Take the microphone and all the other nodes and make them available for Mindboost
navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: false,
noiseSuppression: false,
autoGainControl: false
},
video: false
}).then((micStream) => {
const audioContext = this.audioContext ? this.audioContext : useAudioStore().getContext()
const microphoneSource = audioContext.createMediaStreamSource(micStream)
this.createdNodes.microphoneSource ||= microphoneSource
// Return both audioContext and microphoneSource to the next link in the chain
return { audioContext, microphoneSource }
})
.then(({ audioContext, microphoneSource }) => {
// First RNBO device creation
return createRNBODevice(audioContext, this.testPatch).then((noiseRNBODevice) => {
this.createdNodes.noiseRNBODevice ||= noiseRNBODevice
return { audioContext, microphoneSource, noiseRNBODevice }
})
})
.then(({ audioContext, microphoneSource, noiseRNBODevice }) => {
// Second RNBO device creation
return createRNBODevice(audioContext, this.musicPatch).then((musicRNBODevice) => {
this.createdNodes.musicRNBODevice ||= musicRNBODevice
return { audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }
})
}).then(({ audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }) => {
this.createdNodes ||= {}
this.noiseRNBODevice = noiseRNBODevice
const noiseAudioElement = this.$refs.Noise.$refs.audioElement
const musicAudioElement = this.$refs[this.currentElement.title].$refs.audioElement
// Assign nodes if they don't exist
this.createdNodes.microphoneSource ||= microphoneSource
this.createdNodes.musicDevice ||= musicRNBODevice.node
this.createdNodes.noiseDevice ||= noiseRNBODevice.node
this.createdNodes.noiseSource ||= audioContext.createMediaElementSource(noiseAudioElement)
this.createdNodes.musicSource ||= audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.musicSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.noiseSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.outputSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.merger ||= audioContext.createChannelMerger(4)
this.createdNodesCount = Object.keys(this.createdNodes).length
})
},
replaceMusicNode () {
if (this.areAllNodesAvailable()) {
const { musicSource, musicGain } = this.createdNodes
setTimeout(() => {
const musicAudioElement = this.$refs[this.currentElement.title].$refs.audioElement
const currentGain = musicGain.gain.value
const newMusicNode = musicSource.context.createMediaElementSource(musicAudioElement)
musicSource.disconnect()
this.connectMusicDevice()
this.createdNodes.musicSource = newMusicNode
// newMusicNode.connect(musicSplitter)
musicGain.gain.linearRampToValueAtTime(currentGain, this.audioContext.currentTime + 2)
}, 2100)
}
},
disconnectMusicNodes () {
const { musicDevice, musicSplitter, musicSource, outputSplitter, musicGain, noiseGain } = this.createdNodes
// useNuxtApp().$logger.log('disconnect Music device')
if (this.areAllNodesAvailable()) {
if (musicGain instanceof GainNode) { musicGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 2) }
if (noiseGain instanceof GainNode) { noiseGain.gain.linearRampToValueAtTime(noiseGain.gain.value, this.audioContext.currentTime) }
setTimeout(() => {
musicSource.disconnect()
musicSplitter.disconnect()
musicDevice.disconnect()
outputSplitter.disconnect()
// useNuxtApp().$logger.log('Music device disconnected')
}, 2050)
} else {
// useNuxtApp().$logger.log('No nodes present to disconnect')
}
},
disconnectNodes () {
// useNuxtApp().$logger.log('DISCONNECT ALL NODES')
if (this.createdNodes.microphoneSource) { this.createdNodes.microphoneSource.mediaStream.getTracks().forEach(track => track.stop()) }
// Ensure this.createdNodes is defined and is an object
if (typeof this.createdNodes === 'object' && this.createdNodes !== null) {
Object.values(this.createdNodes).forEach((node) => {
// Check if the node exists and has a disconnect method
if (node && typeof node.disconnect === 'function') {
node.disconnect()
}
// Additional handling for specific types of nodes, if necessary
// For example, stopping an OscillatorNode
if (node instanceof OscillatorNode) {
node.stop()
}
})
}
useAudioStore().setPlaying(false)
}
}
}
</script>
<style scoped>
.rnboplayer {
position: fixed;
width: 220px; /* Or specify a fixed width like 220px if you prefer */
max-width: 220px; /* This line might be redundant depending on your width strategy */
height: fit-content;
display: inline-grid;
z-index: 2;
bottom: 11%;
left: 0;
right: 0;
margin-left: auto;
margin-right: auto;
margin-bottom: 1em;
}
.play.yellow {
background: rgba(255, 176, 66, 0.008);
border-radius: 50%;
box-shadow: 0 0 0 0 rgba(255, 177, 66, 1);
animation: pulse-yellow 4s infinite;
position: fixed;
bottom: 40%;
width: 200px;
height: 200px;
background-image: url('/images/playbtn.svg');
background-repeat:no-repeat;
background-attachment:fixed;
background-position: 58% 55%;
}
.pause.yellow {
background: rgba(255, 176, 66, 0.008);
border-radius: 50%;
box-shadow: 0 0 0 0 rgba(255, 177, 66, 1);
opacity: 0.05;
position: fixed;
bottom: 40%;
width: 200px;
height: 200px;
background-image: url('/images/pausebtn.svg');
background-size: 130px 100px;
background-repeat:no-repeat;
background-attachment:fixed;
background-position: center;
}
.pause.yellow:hover{
opacity: 0.5;
}
@keyframes pulse-yellow {
0% {
transform: scale(0.95);
box-shadow: 0 0 0 0 rgba(255, 177, 66, 0.7);
}
70% {
transform: scale(1);
box-shadow: 0 0 0 10px rgba(255, 177, 66, 0);
}
100% {
transform: scale(0.95);
box-shadow: 0 0 0 0 rgba(255, 177, 66, 0);
}
}
</style>
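The `||=` assignments in `prepareWebAudioNodes` above make the node setup idempotent across repeated play/pause cycles: nodes that already exist are reused instead of recreated. A minimal pure-JS sketch of that pattern (the `prepareNodes` helper and node names here are hypothetical, not part of the component):

```javascript
// Idempotent lazy initialization via logical-OR assignment (||=):
// a second call reuses existing entries instead of invoking the factory again.
function prepareNodes (createdNodes, factory) {
  createdNodes.gain ||= factory('gain')
  createdNodes.splitter ||= factory('splitter')
  return createdNodes
}

let factoryCalls = 0
const makeNode = (name) => { factoryCalls++; return { name } }

const nodes = prepareNodes({}, makeNode) // creates both entries
prepareNodes(nodes, makeNode) // reuses both entries, factory not called again
```

This matters in the component because `createMediaElementSource` throws if called twice on the same media element, so reusing the cached node is the only safe path on re-entry.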


@ -1,606 +0,0 @@
<template>
<div class="rnboplayer">
<button v-if="preparedForWebAudio" @click="fixAudio">
fix Audio
</button>
<button v-if="preparedForWebAudio" @click="skip('next')">
skip sound
</button>
<button v-if="preparedForWebAudio" @click="skip('previous')">
previous sound
</button>
<button v-if="preparedForWebAudio" @click="mute">
mute AudioElements
</button>
<button v-if="preparedForWebAudio" @click="connectMusicDevice">
connect Music
</button>
<button v-if="preparedForWebAudio" @click="connectNoiseNode">
connect Noise
</button>
<button v-if="preparedForWebAudio" @click="connectNodes">
connect Nodes
</button>
<button v-if="preparedForWebAudio" @click="disconnectMusicNodes">
disconnect Music Nodes
</button>
<div v-if="selectedAudioTitle">
<AudioElement
:ref="selectedAudio.title"
:key="selectedAudio.id"
:src="selectedAudio.src"
:title="selectedAudio.title"
@update:playing="handlePlayingUpdate"
@volume="changeMusicVolume"
>
<template #default="{ play, pause }">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</div>
<div style="display:none">
<button v-if="!isPlaying()" @click="play">
Play
</button>
<button v-else @click="pause">
Pause
</button>
</div>
</template>
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</div>
<AudioElement
ref="Noise"
key="5"
:src="sourceNoiseURL"
title="Noise"
@volume="changeNoiseVolume"
>
<template #default="{ play, pause }">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</div>
<div style="display:none">
<button v-if="!isPlaying()" @click="play">
Play
</button>
<button v-else @click="pause">
Pause
</button>
</div>
</template>
</AudioElement>
</div>
</template>
<script>
import AudioElement from '@/components/experiments/AudioElement.vue'
import importedMusicPatcher from '@/assets/patch/music_patch.export.json'
import importedNoisePatcher from '@/assets/patch/noise_patch.export.json'
import importedTestPatcher from '@/assets/patch/test_patch.export.json'
import { createRNBODevice } from '@/lib/AudioFunctions'
import { ANC } from '~/stores/interfaces/ANC'
import { HeadsetType } from '~/stores/interfaces/HeadsetType'
import { useUserStore } from '~/stores/user'
function getAttenuationFactor (anc, ht) {
useUserStore()
if (anc === ANC.Yes && ht === HeadsetType.OverEar) { return 0.0562 }
if (anc === ANC.No && ht === HeadsetType.OverEar) { return 0.5623 }
if (anc === ANC.Yes && ht === HeadsetType.InEar) { return 0.1778 }
if (anc === ANC.No && ht === HeadsetType.InEar) { return 0.0316 }
return 0.5623
}
export default {
components: {
AudioElement
},
props: {
soundscape: {
type: String,
default: 'Lagoon'
}
},
data () {
return {
audioList: [
{ id: 1, title: 'Lagoon', src: window.location.origin + '/sounds/lagoon.m4a' },
{ id: 2, title: 'Forest', src: window.location.origin + '/sounds/forest.m4a' },
{ id: 3, title: 'Meadow', src: window.location.origin + '/sounds/meadow.m4a' },
{ id: 4, title: 'Tropics', src: window.location.origin + '/sounds/tropics.m4a' }
],
noise: { id: 5, title: 'Noise', src: window.location.origin + '/masking/noise.flac' },
selectedAudioTitle: '',
currentElement: {},
createdNodes: {},
musicPatch: importedMusicPatcher,
noisePatch: importedNoisePatcher,
audioContext: null,
toggleMute: false
}
},
computed: {
selectedAudio () {
return this.audioList.find(audio => audio.title === this.selectedAudioTitle) || null
},
isPlaying () {
return useAudioStore().isPlaying()
},
preparedForWebAudio () {
return this.areAllNodesAvailable()
},
sourceNoiseURL () {
return window.location.origin + '/masking/noise.flac'
}
},
onRenderTriggered () {
// useNuxtApp().$logger.log('render AudioNoiseTest-----------' + props.title)
},
beforeUnmount () {
this.disconnectNodes()
if (this.audioContext) { this.audioContext.close() }
this.audioContext = null
},
beforeMount () {
this.audioContext = new AudioContext()
},
mounted () {
this.selectAudioByTitle(this.audioList[0].title)
this.currentElement = this.audioList[0]
this.addUserNavigationHandling()
this.musicPatch = importedMusicPatcher
this.noisePatch = importedNoisePatcher
this.testPatch = importedTestPatcher
},
methods: {
fixAudio () {
// useNuxtApp().$logger.log('fixAudio')
// useNuxtApp().$logger.log(' ||||| noiseAudioElement ', { noiseAudioElement }, ' musicAudioElement ', { musicAudioElement })
// useNuxtApp().$logger.log('||||| volume noise ' + noiseAudioElement.volume + ' volume music ' + musicAudioElement.volume)
// useNuxtApp().$logger.log('||||| playstate noise ' + noiseAudioElement.autoplay + ' playstate music ' + musicAudioElement.autoplay)
// useNuxtApp().$logger.log('||||| muted noise ' + noiseAudioElement.muted + ' playstate music ' + musicAudioElement.muted)
// useNuxtApp().$logger.log('||||| gain noise ' + this.createdNodes.noiseGain.gain.value + ' gain music ' + this.createdNodes.musicGain.gain.value)
this.connectToGainToDestination()
this.unmute()
// this.connectDirectToDestination() // brings the sound back
},
connectDirectToDestination () {
this.createdNodes.noiseSource.connect(this.createdNodes.noiseSource.context.destination)
this.createdNodes.musicSource.connect(this.createdNodes.musicSource.context.destination)
},
connectToGainToDestination () {
this.createdNodes.noiseSource.connect(this.createdNodes.noiseGain)
this.createdNodes.musicSource.connect(this.createdNodes.musicGain)
this.createdNodes.noiseGain.gain.setValueAtTime(0, this.createdNodes.noiseGain.context.currentTime)
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(0.8, this.createdNodes.noiseGain.context.currentTime + 3)
this.createdNodes.musicGain.gain.setValueAtTime(0, this.createdNodes.noiseGain.context.currentTime)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(0.8, this.createdNodes.musicGain.context.currentTime + 3)
this.createdNodes.noiseGain.connect(this.createdNodes.noiseGain.context.destination)
this.createdNodes.musicGain.connect(this.createdNodes.musicGain.context.destination)
},
mute () {
// Assuming gainNode is an instance of GainNode and audioContext is your AudioContext
let newValue = 0.5 // The new gain value you want to ramp to
const rampDuration = 0.1 // Duration of the ramp in seconds
const currentTime = this.audioContext.currentTime
if (this.toggleMute) {
newValue = 0
const musicAudioElement = this.$refs[this.currentElement.title].audioElement
const noiseAudioElement = this.$refs.Noise.audioElement
noiseAudioElement.volume = 0
musicAudioElement.volume = 0
musicAudioElement.muted = true
noiseAudioElement.muted = true
this.createdNodes.musicGain.gain.setValueAtTime(this.createdNodes.musicGain.gain.value, currentTime) // Start at the current value
this.createdNodes.musicGain.gain.linearRampToValueAtTime(newValue, currentTime + rampDuration)
this.createdNodes.noiseGain.gain.setValueAtTime(this.createdNodes.noiseGain.gain.value, currentTime) // Start at the current value
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(newValue, currentTime + rampDuration)
this.toggleMute = false
} else {
newValue = 1
this.toggleMute = true
const musicAudioElement = this.$refs[this.currentElement.title].audioElement
const noiseAudioElement = this.$refs.Noise.audioElement
noiseAudioElement.volume = 1
musicAudioElement.volume = 1
musicAudioElement.muted = false
noiseAudioElement.muted = false
this.createdNodes.musicGain.gain.setValueAtTime(this.createdNodes.musicGain.gain.value, currentTime) // Start at the current value
this.createdNodes.musicGain.gain.linearRampToValueAtTime(newValue, currentTime + rampDuration)
this.createdNodes.noiseGain.gain.setValueAtTime(this.createdNodes.noiseGain.gain.value, currentTime) // Start at the current value
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(newValue, currentTime + rampDuration)
}
},
unmute () {
const musicAudioElement = this.$refs[this.currentElement.title].audioElement
const noiseAudioElement = this.$refs.Noise.audioElement
noiseAudioElement.volume = 1
musicAudioElement.volume = 1
musicAudioElement.muted = false
noiseAudioElement.muted = false
},
changeMusicVolume (newValue) {
if (!this.createdNodes.musicGain || !this.createdNodes.musicGain.gain) { return }
// useNuxtApp().$logger.log(this.createdNodes.musicGain.gain)
this.createdNodes.musicGain.gain.cancelScheduledValues(this.createdNodes.musicGain.context.currentTime)
this.createdNodes.musicGain.gain.linearRampToValueAtTime(newValue / 100, this.createdNodes.musicGain.context.currentTime + 0.30)
},
changeNoiseVolume (newValue) {
const noiseGain = this.createdNodes.noiseGain
if (!noiseGain) { return }
noiseGain.gain.cancelScheduledValues(noiseGain.context.currentTime)
noiseGain.gain.linearRampToValueAtTime(newValue / 100, noiseGain.context.currentTime + 0.30)
},
addUserNavigationHandling () {
if ('mediaSession' in navigator) {
// Set the handler for the next track action
navigator.mediaSession.setActionHandler('nexttrack', () => {
this.skip('next')
})
// Set the handler for the previous track action
navigator.mediaSession.setActionHandler('previoustrack', () => {
this.skip('previous')
})
}
},
getSong (currentTitle, direction) {
const index = this.audioList.findIndex(song => song.title === currentTitle)
let adjacentIndex = index + (direction === 'next' ? 1 : -1)
// Loop back to the first song if 'next' goes beyond the last index
if (adjacentIndex >= this.audioList.length) {
adjacentIndex = 0
} else if (adjacentIndex < 0) {
adjacentIndex = this.audioList.length - 1
}
return this.audioList[adjacentIndex]
},
skip (direction) {
const nextSong = this.getSong(this.selectedAudioTitle, direction)
this.selectAudioByTitle(nextSong.title)
},
updateCurrentElement (title) {
this.currentElement = this.audioList.find(e => e.title === title)
},
selectAudioByTitle (title) {
this.selectedAudioTitle = title
// const audioElement = this.audioList.find(e => e.title === this.selectedAudioTitle)
this.currentElement = this.audioList[this.audioList.findIndex(e => e.title === title)]
// useNuxtApp().$logger.log('refs ', refs)
// useNuxtApp().$logger.log('currentElement: ', this.selectedAudioTitle)
},
areAllNodesAvailable () {
// List of required node keys as you've named them in this.createdNodes
const requiredNodes = [
'musicSource', 'musicSplitter', 'microphoneSource',
'musicDevice', 'outputSplitter', 'musicGain',
'noiseSource', 'noiseSplitter', 'noiseDevice',
'noiseGain', 'merger'
]
// Check if each required node exists and is not undefined in this.createdNodes
return requiredNodes.every(nodeKey => this.createdNodes[nodeKey] !== undefined)
},
connectMusicNode () {
const {
microphoneSource, musicSplitter,
outputSplitter, musicSource, musicDevice
} = this.createdNodes
musicSource.connect(musicSplitter)
microphoneSource.connect(musicDevice, 0, 0)
musicSplitter.connect(musicDevice, 0, 1) // 1 channel: Left
musicSplitter.connect(musicDevice, 1, 2) // 1 channel: Right
musicDevice.connect(outputSplitter, 0, 0)
outputSplitter.connect(outputSplitter.context.destination)
},
async connectTestNode () {
const testPatch = await createRNBODevice(this.audioContext, importedTestPatcher)
const testPatchNode = testPatch.node
const {
musicSource, musicSplitter, noiseSource, noiseSplitter
} = this.createdNodes
musicSource.connect(musicSplitter)
noiseSource.connect(noiseSplitter)
noiseSplitter.connect(testPatchNode, 0, 1) // 1 channel: Left
noiseSplitter.connect(testPatchNode, 1, 2)
musicSplitter.connect(testPatchNode, 0, 3) // 1 channel: Left
musicSplitter.connect(testPatchNode, 1, 4) // 1 channel: Right
testPatchNode.connect(this.audioContext.destination)
},
connectMusicDevice () {
const { musicDevice, microphoneSource, musicSplitter, musicSource, musicGain, outputSplitter } = this.createdNodes
// useNuxtApp().$logger.log('connect Music device')
musicSource.connect(musicSplitter, 0, 0)
// musicSource.connect(musicSource.context.destination)
microphoneSource.connect(musicDevice, 0, 0)
musicSplitter.connect(musicDevice, 0, 1)
musicSplitter.connect(musicDevice, 1, 2)
musicDevice.connect(musicGain)
musicDevice.connect(outputSplitter)
musicGain.connect(this.audioContext.destination)
// useNuxtApp().$logger.log('Music device connected')
musicGain.gain.value = 1
},
connectNoiseNode () {
const {
noiseSplitter, microphoneSource, noiseDevice,
outputSplitter, noiseSource
} = this.createdNodes
this.connectMusicDevice()
noiseSource.connect(noiseSplitter, 0, 0)
microphoneSource.connect(noiseDevice, 0, 0)
noiseSplitter.connect(noiseDevice, 0, 1)
noiseSplitter.connect(noiseDevice, 1, 2) // 2 channels
outputSplitter.connect(noiseDevice, 0, 3)
outputSplitter.connect(noiseDevice, 1, 4)
noiseDevice.connect(this.audioContext.destination)
},
connectNodes () {
// Destructure for easier access
const {
musicSource, musicSplitter, microphoneSource,
musicDevice, outputSplitter, musicGain,
noiseSource, noiseSplitter, noiseDevice,
noiseGain
} = this.createdNodes
// Assuming all nodes are created and references to them are correct
// Connect music source to splitter (Stereo to 2 Channels)
musicSource.connect(musicSplitter) // 2 channels: L, R
// Connect microphone to musicDevice input 0 (Mono)
microphoneSource.connect(musicDevice, 0, 0) // 1 channel: Microphone
// Connect musicSplitter to musicDevice inputs 1 and 2 (Stereo)
musicSplitter.connect(musicDevice, 0, 1) // 1 channel: Left
musicSplitter.connect(musicDevice, 1, 2) // 1 channel: Right
// Connect musicDevice to outputSplitter (Stereo out)
musicDevice.connect(outputSplitter) // Assuming musicDevice outputs stereo
// Optionally connect musicDevice to musicGain for additional gain control (Stereo)
musicDevice.connect(musicGain) // Assuming musicDevice outputs stereo, connected to both channels of musicGain
// Connect noise source to noiseSplitter (Stereo to 2 Channels)
noiseSource.connect(noiseSplitter) // 2 channels: L, R
// Connect microphone to noiseDevice input 0 (Mono)
microphoneSource.connect(noiseDevice, 0, 0) // 1 channel: Microphone
// Connect noiseSplitter to noiseDevice inputs 1 and 2 (Stereo)
noiseSplitter.connect(noiseDevice, 0, 1) // 1 channel: Left
noiseSplitter.connect(noiseDevice, 1, 2) // 1 channel: Right
// Connect outputSplitter to noiseDevice inputs 3 and 4 (Stereo from musicDevice)
outputSplitter.connect(noiseDevice, 0, 3) // 1 channel: Left from musicDevice
outputSplitter.connect(noiseDevice, 1, 4) // 1 channel: Right from musicDevice
// Assuming noiseDevice outputs stereo, connect to noiseGain (Stereo)
noiseDevice.connect(noiseGain) // Assuming noiseDevice outputs stereo, connected to both channels of noiseGain
const attenuation = noiseDevice.parametersById.get('attenuation')
attenuation.value = getAttenuationFactor('Yes', 'OverEar')
noiseGain.connect(this.audioContext.destination)
musicGain.connect(this.audioContext.destination)
},
handlePlayingUpdate (isPlaying) {
const ContextClass = (window.AudioContext || window.webkitAudioContext || window.mozAudioContext || window.oAudioContext || window.msAudioContext)
let audioContext = this.audioContext ? this.audioContext : new ContextClass()
let microphoneSource = null
if (isPlaying) {
// Web Audio API is available.
// const musicAudioComponent = this.$refs[this.currentElement.title].audioElement
if (this.areAllNodesAvailable()) {
this.disconnectNodes()
// reconnecting because all Nodes are there
this.connectNoiseNode()
this.unmute()
// useNuxtApp().$logger.log('Connected everything because it was there already')
} else {
navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: false,
noiseSuppression: false,
autoGainControl: false
},
video: false
})
.then((micStream) => {
audioContext = this.audioContext
microphoneSource = audioContext.createMediaStreamSource(micStream)
this.createdNodes.microphoneSource ||= microphoneSource
// Return both audioContext and microphoneSource to the next link in the chain
return { audioContext, microphoneSource }
})
.then(({ audioContext, microphoneSource }) => {
// First RNBO device creation
return createRNBODevice(audioContext, this.noisePatch).then((noiseRNBODevice) => {
// useNuxtApp().$logger.log({ noiseRNBODevice }, 'Noise RNBODEVICE created: with ' + noiseRNBODevice.node.numberOfInputs + ' inputs and ' + noiseRNBODevice.node.numberOfOutputs + ' outputs')
this.createdNodes.noiseRNBODevice ||= noiseRNBODevice
// Return all necessary objects for the next step
return { audioContext, microphoneSource, noiseRNBODevice }
})
})
.then(({ audioContext, microphoneSource, noiseRNBODevice }) => {
// Second RNBO device creation
return createRNBODevice(audioContext, this.musicPatch).then((musicRNBODevice) => {
// useNuxtApp().$logger.log({ noiseRNBODevice }, 'Music RNBODEVICE created: with ' + musicRNBODevice.node.numberOfInputs + ' inputs and ' + musicRNBODevice.node.numberOfOutputs + ' outputs')
// Return all necessary objects for the final setup
this.createdNodes.musicRNBODevice ||= musicRNBODevice
return { audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }
})
}).then(({ audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }) => {
// Ensure this.createdNodes is initialized
this.createdNodes ||= {}
const noiseAudioElement = this.$refs.Noise.$refs.audioElement
const musicAudioElement = this.$refs[this.currentElement.title].$refs.audioElement
// Assign nodes if they don't exist
this.createdNodes.microphoneSource ||= microphoneSource
this.createdNodes.musicDevice ||= musicRNBODevice.node
this.createdNodes.noiseDevice ||= noiseRNBODevice.node
this.createdNodes.noiseSource ||= audioContext.createMediaElementSource(noiseAudioElement)
this.createdNodes.musicSource ||= audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.musicSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.noiseSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.outputSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.merger ||= audioContext.createChannelMerger(4)
this.createdNodes.musicGain.gain.value = 0.901
this.createdNodes.noiseGain.gain.value = 0.901 // has no effect
musicAudioElement.muted = false
noiseAudioElement.muted = false
// this.connectNodes()
// this.connectTestNode()
})
.catch((_error) => {
this.disconnectNodes()
if (this.audioContext) {
this.audioContext.destination.disconnect()
// audioContext.destination = null
}
})
}
} else {
this.disconnectNodes()
}
},
disconnectMusicNodes () {
const { musicDevice, musicSplitter, musicSource, outputSplitter, musicGain } = this.createdNodes
// useNuxtApp().$logger.log('disconnect Music device')
musicGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + 2)
setTimeout(() => {
musicSource.disconnect()
// microphoneSource.disconnect()
musicSplitter.disconnect()
musicDevice.disconnect()
outputSplitter.disconnect()
// useNuxtApp().$logger.log('Music device disconnected')
}, 2100)
},
fadeInToNewSoundscape (oldNode, newNode, audioContext, duration = 1) {
// Create two gain nodes, one for each audio source
const oldGain = this.createdNodes.musicGain
const newGain = audioContext.createGain()
// Initially, the old node is at full volume, and the new one is silent
oldGain.gain.setValueAtTime(oldGain.gain.value, audioContext.currentTime)
newGain.gain.setValueAtTime(0, audioContext.currentTime)
// Connect the nodes
oldNode.connect(oldGain).connect(audioContext.destination)
newNode.connect(newGain).connect(audioContext.destination)
// Schedule the fade out of the old node
oldGain.gain.linearRampToValueAtTime(0, audioContext.currentTime + duration)
// Schedule the fade in of the new node
newGain.gain.linearRampToValueAtTime(1, audioContext.currentTime + duration)
// Optionally, stop the old node after the fade out
oldNode.mediaElement.addEventListener('ended', () => {
oldNode.disconnect()
})
// Start playing the new node if it's not already playing
if (newNode.mediaElement.paused) {
newNode.mediaElement.play().catch(error => console.error('Playback error:', error))
}
},
disconnectNodes () {
// useNuxtApp().$logger.log('DISCONNECT ALL NODES')
// Ensure this.createdNodes is defined and is an object
if (typeof this.createdNodes === 'object' && this.createdNodes !== null) {
Object.values(this.createdNodes).forEach((node) => {
// Check if the node exists and has a disconnect method
if (node && typeof node.disconnect === 'function') {
node.disconnect()
}
// Additional handling for specific types of nodes, if necessary
// For example, stopping an OscillatorNode
if (node instanceof OscillatorNode) {
node.stop()
}
})
}
}
}
}
</script>
<style scoped>
.rnboplayer {
position: fixed;
width: 220px; /* Or specify a fixed width like 220px if you prefer */
max-width: 220px; /* This line might be redundant depending on your width strategy */
height: fit-content;
display: inline-grid;
z-index: 2;
bottom: 11%;
left: 0;
right: 0;
margin-left: auto;
margin-right: auto;
margin-bottom: 1em;
}
.play.yellow {
background: rgba(255, 176, 66, 0.008);
border-radius: 50%;
box-shadow: 0 0 0 0 rgba(255, 177, 66, 1);
animation: pulse-yellow 4s infinite;
position: fixed;
bottom: 40%;
width: 200px;
height: 200px;
background-image: url('/images/playbtn.svg');
background-repeat:no-repeat;
background-attachment:fixed;
background-position: 58% 55%;
}
.pause.yellow {
background: rgba(255, 176, 66, 0.008);
border-radius: 50%;
box-shadow: 0 0 0 0 rgba(255, 177, 66, 1);
opacity: 0.05;
position: fixed;
bottom: 40%;
width: 200px;
height: 200px;
background-image: url('/images/pausebtn.svg');
background-size: 130px 100px;
background-repeat:no-repeat;
background-attachment:fixed;
background-position: center;
}
.pause.yellow:hover{
opacity: 0.5;
}
@keyframes pulse-yellow {
0% {
transform: scale(0.95);
box-shadow: 0 0 0 0 rgba(255, 177, 66, 0.7);
}
70% {
transform: scale(1);
box-shadow: 0 0 0 10px rgba(255, 177, 66, 0);
}
100% {
transform: scale(0.95);
box-shadow: 0 0 0 0 rgba(255, 177, 66, 0);
}
}
</style>
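The `getSong` helper above wraps around both ends of the playlist. Extracted as standalone JS for clarity (titles taken from the component's `audioList`; `src` fields omitted):

```javascript
// Wrap-around adjacent-track lookup, as used by skip('next') / skip('previous').
const audioList = [
  { id: 1, title: 'Lagoon' },
  { id: 2, title: 'Forest' },
  { id: 3, title: 'Meadow' },
  { id: 4, title: 'Tropics' }
]

function getSong (currentTitle, direction) {
  const index = audioList.findIndex(song => song.title === currentTitle)
  let adjacentIndex = index + (direction === 'next' ? 1 : -1)
  // Loop back to the first song if 'next' goes beyond the last index,
  // and to the last song if 'previous' goes before the first.
  if (adjacentIndex >= audioList.length) {
    adjacentIndex = 0
  } else if (adjacentIndex < 0) {
    adjacentIndex = audioList.length - 1
  }
  return audioList[adjacentIndex]
}
```

The same function backs the Media Session `nexttrack`/`previoustrack` handlers registered in `addUserNavigationHandling`.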


@ -1,110 +0,0 @@
<template>
<h1>Glitching in audio</h1>
<h1>Test version Noise Patch Music: with Web Audio, without noise patch & without music patch, but with music</h1>
<h2>Currently artifacts are caused by the import of the RNBO patch</h2>
<AudioElement
ref="Noise"
key="5"
:src="noise_src"
title="Noise"
@update:volume="updateNoiseGain"
>
<template #default="{}">
<div class="icon">
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</div>
</template>
</AudioElement>
<AudioElement
ref="Music"
key="1"
:src="forest_src"
title="Forest"
@update:volume="updateMusicGain"
@update:playing="handlePlayingUpdate"
>
<template #default="{ }">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</div>
</template>
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</template>
<script lang="ts">
import { type Device, createDevice } from '@rnbo/js'
import AudioElement from '~/components/experiments/AudioElement.vue'
import { useAudioStore } from '~/stores/audio'
import importedNoisePatcher from '@/assets/patch/noise_patch.export.json'
export default {
name: 'MicNoisePatchMusic',
components: { AudioElement },
data () {
return {
audioContext: useAudioStore().getContext() as AudioContext,
playing: false,
paused: false,
createdNodes: {} as any,
noiseDevice: null as Device | null,
forest_src: window.location.origin + useRuntimeConfig().public.tracks.forest_src as string,
noise_src: window.location.origin + useRuntimeConfig().public.noise_src as string
}
},
beforeMount () {
// useNuxtApp().$logger.log('beforeMount')
this.createNoiseDevice()
},
methods: {
async createNoiseDevice () {
if (!this.noiseDevice) {
const patcher = await importedNoisePatcher as any
//
// the method createDevice is called and right after it the audio glitches start
//
this.noiseDevice = await createDevice({
context: this.audioContext,
patcher
})
// useNuxtApp().$logger.log(patcher)
}
},
async handlePlayingUpdate () {
const noiseElement = await this.$refs.Noise as typeof AudioElement
const noiseAudioElement = noiseElement.$refs.audioElement as HTMLMediaElement
const musicElement = this.$refs.Music as typeof AudioElement
const musicAudioElement = musicElement.$refs.audioElement as HTMLMediaElement
const audioContext = this.audioContext
const destination = this.audioContext.destination
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.noiseSource ||= audioContext.createMediaElementSource(noiseAudioElement)
this.createdNodes.musicSource ||= audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.noiseSource.connect(this.createdNodes.noiseGain)
this.createdNodes.musicSource.connect(this.createdNodes.musicGain)
this.createdNodes.noiseGain.connect(destination)
this.createdNodes.musicGain.connect(destination)
await this.createNoiseDevice()
musicAudioElement.muted = false
noiseAudioElement.muted = false
},
updateNoiseGain (volume: number) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
},
updateMusicGain (volume: number) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
}
}
}
</script>
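The `getAttenuationFactor` if-chain seen earlier (ANC state × headset type → attenuation) can equivalently be written as a lookup table; a sketch with the values copied from the component (the key format and table name are assumptions for illustration):

```javascript
// Attenuation factors keyed by ANC state and headset type,
// mirroring the if-chain in getAttenuationFactor.
const ATTENUATION = {
  'Yes|OverEar': 0.0562,
  'No|OverEar': 0.5623,
  'Yes|InEar': 0.1778,
  'No|InEar': 0.0316
}

function getAttenuationFactor (anc, headsetType) {
  // 0.5623 is the fallback used by the original implementation.
  return ATTENUATION[`${anc}|${headsetType}`] ?? 0.5623
}
```

A table keeps the four combinations visible at a glance and makes adding a new headset type a one-line change.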


@ -1,192 +0,0 @@
<template>
<h1>Test version NoiseMusicGain: with Web Audio & gain and play/pause, loading of devices</h1>
<h2>Play State: {{ playing }} </h2>
<p>
The method refreshAudioContext helps free resources when we stop playing the audio;
without it, playback would get louder each time we start playing.
</p>
<p>
Whenever I view this page, the audio starts playing. When I hit 'space', it fades out within 2 seconds.
When I start playing again, nothing happens; I would expect it to play.
</p>
<div v-if="createdNodes.musicGain">
{{ createdNodes.musicGain.gain.value }}
</div>
<div v-if="createdNodes.noiseGain">
{{ createdNodes.noiseGain.gain.value }}
</div>
<AudioElement
ref="Microphone"
key="6"
src="/sounds/debug/LMusik_RSprache.mp3"
title="Microphone"
@update:volume="updateNoiseGain"
>
<template #default="{}">
<div class="icon">
<img style="width: 25px" src="~/assets/image/headphone.png">
</div>
</template>
</AudioElement>
<AudioElement
ref="Noise"
key="5"
:src="noise_src"
title="Noise"
@update:volume="updateNoiseGain"
@update:loaded="noiseReady=true"
>
<template #default="{}">
<div class="icon">
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</div>
</template>
</AudioElement>
<AudioElement
ref="Music"
key="1"
:src="forest_src"
title="Forest"
@update:volume="updateMusicGain"
@update:playing="handlePlayingUpdate"
@update:fadeout="fadeOutGains"
@update:loaded="musicReady=true"
>
<template #default="{ }">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</div>
</template>
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</template>
<script lang="ts">
import { type Device } from '@rnbo/js'
import AudioElement from '~/components/experiments/AudioElement.vue'
import { useAudioStore } from '~/stores/audio'
import importedNoisePatcher from '@/assets/patches/1.3.1_versions/singleBand/adaptive_masking_controller_NoMusic.rnbopat.export.json'
import importedMusicPatcher from '@/assets/patches/1.3.1_versions/nomusicPatch/export/js/ASM_Vers_4in2out_48kHz_NoMusik.rnbopat.export.json'
import { createRNBODevice } from '@/lib/AudioFunctions'
import setupNodes from '~/components/Player/Nodes'
export default {
name: 'NoiseMusicGain',
components: { AudioElement },
data () {
return {
audioContext: useAudioStore().getContext(),
playing: useAudioStore().playing,
paused: false,
createdNodes: {} as any,
fading: false,
noiseReady: false,
musicReady: false,
musicDevice: {} as Device,
noiseDevice: {} as Device,
forest_src: window.location.origin + useRuntimeConfig().public.tracks.forest_src as string,
noise_src: window.location.origin + useRuntimeConfig().public.noise_src as string
}
},
async beforeMount () {
this.noiseDevice = await createRNBODevice(this.audioContext, importedNoisePatcher)
this.musicDevice = await createRNBODevice(this.audioContext, importedMusicPatcher)
// useNuxtApp().$logger.log('beforeMount: created RNBO devices ', this.musicDevice, this.noiseDevice)
},
mounted () {
},
methods: {
    // This method helps to free resources when we stop playing the audio;
    // without it, playback would get louder each time we start playing
refreshAudioContext () {
const newAudioContext = new AudioContext()
this.audioContext.close()
useAudioStore().audioContext = newAudioContext
this.audioContext = useAudioStore().getContext()
},
    fadeOutGains () {
      const fadeDuration = 3 // seconds, matches the ramps below
      this.fading = true
      if (this.createdNodes.noiseGain) {
        this.createdNodes.noiseGain.gain.cancelScheduledValues(this.audioContext.currentTime)
        this.createdNodes.noiseGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + fadeDuration)
      }
      if (this.createdNodes.musicGain) {
        this.createdNodes.musicGain.gain.cancelScheduledValues(this.audioContext.currentTime)
        this.createdNodes.musicGain.gain.linearRampToValueAtTime(0, this.audioContext.currentTime + fadeDuration)
      }
      setTimeout(() => {
        this.fading = false
      }, fadeDuration * 1000) // setTimeout expects a duration in ms, not an absolute context time
    },
    fadeInGains () {
      const fadeDuration = 6 // seconds
      const fadeTime = this.audioContext.currentTime + fadeDuration
      this.fading = true
      const noiseGain = this.createdNodes.noiseGain
      const musicGain = this.createdNodes.musicGain
      noiseGain.gain.linearRampToValueAtTime(1.0, fadeTime)
      musicGain.gain.linearRampToValueAtTime(1.0, fadeTime)
      setTimeout(() => {
        this.fading = false
      }, fadeDuration * 1000) // setTimeout expects a duration in ms, not an absolute context time
    },
async handlePlayingUpdate (state: boolean) {
      if (!this.musicReady || !this.noiseReady) {
// useNuxtApp().$logger.log('not yet ready' + this.musicReady + ' ready noise' + this.noiseReady)
return
}
if (state && useAudioStore().isPlaying()) {
// Music has just started react on it.
const noiseElement = this.$refs.Noise as typeof AudioElement
const noiseAudioElement = noiseElement.$refs.audioElement as HTMLMediaElement
const musicElement = this.$refs.Music as typeof AudioElement
const musicAudioElement = musicElement.$refs.audioElement as HTMLMediaElement
const audioContext = this.audioContext
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.musicGain.gain.setValueAtTime(0, audioContext.currentTime)
this.createdNodes.noiseGain.gain.setValueAtTime(0, audioContext.currentTime)
musicAudioElement.muted = false
noiseAudioElement.muted = false
this.createdNodes.noiseSource = audioContext.createMediaElementSource(noiseAudioElement)
this.createdNodes.musicSource = audioContext.createMediaElementSource(musicAudioElement)
const gains = await setupNodes(this.createdNodes.noiseSource, this.createdNodes.musicSource)
        this.createdNodes.musicGain = gains.pop() // Order is important: first musicGain, then noiseGain
this.createdNodes.noiseGain = gains.pop()
// useNuxtApp().$logger.log('GAINVALUES M:' + this.createdNodes.musicGain.gain.value, ' N:' + this.createdNodes.noiseGain.gain.value)
this.fadeInGains()
} else {
if (this.fading) { return }
// Music has just stopped react on it.
this.playing = false
setTimeout(() => {
          this.createdNodes = {}
this.refreshAudioContext()
}, 1500)
}
},
updateNoiseGain (volume: number) {
if (this.createdNodes.noiseGain) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
}
},
updateMusicGain (volume: number) {
if (this.createdNodes.musicGain) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
}
}
}
}
</script>
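`handlePlayingUpdate` above creates fresh `MediaElementAudioSourceNode`s after each context refresh, since `createMediaElementSource` may only be called once per media element and context. A sketch of a once-per-key cache that would make such a guard explicit (hypothetical helper, not part of the component above):

```javascript
// Wraps a factory so each key is only ever created once.
// Useful for node types that must not be constructed twice for
// the same underlying element.
function makeOnce (create) {
  const cache = new Map()
  return (key) => {
    if (!cache.has(key)) { cache.set(key, create(key)) }
    return cache.get(key)
  }
}

let calls = 0
const getSource = makeOnce(() => ({ id: ++calls })) // stand-in for createMediaElementSource
console.log(getSource('music') === getSource('music'), calls) // → true 1
```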


@ -1,97 +0,0 @@
<template>
  <h1>Glitching in audio</h1>
  <h1>Test Version Mic NoisePatch Music: with WebAudio, with noise patch & without music patch, but with music</h1>
  <h2>Currently artifacts caused by the import of the RNBO patch</h2>
<AudioElement
ref="Noise"
key="5"
:src="noise_src"
title="Noise"
@update:volume="updateNoiseGain"
>
<template #default="{}">
<div class="icon">
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</div>
</template>
</AudioElement>
<AudioElement
ref="Music"
key="1"
:src="forest_src"
title="Forest"
@update:volume="updateMusicGain"
@update:playing="handlePlayingUpdate"
>
<template #default="{ }">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</div>
</template>
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</template>
<script lang="ts">
import { createDevice } from '@rnbo/js'
import AudioElement from '~/components/experiments/AudioElement.vue'
import { useAudioStore } from '~/stores/audio'
import importedNoisePatcher from '@/assets/patch/test_patch.export.json'
export default {
name: 'MicNoisePatchMusic',
components: { AudioElement },
data () {
return {
audioContext: useAudioStore().getContext(),
playing: false,
paused: false,
createdNodes: {} as any,
noiseDevice: null,
noise_src: window.location.origin + useRuntimeConfig().public.noise_src,
forest_src: window.location.origin + useRuntimeConfig().public.tracks.forest_src
}
},
beforeMount () {
const patcher = importedNoisePatcher as any
return createDevice({
context: useAudioStore().getContext(),
patcher
}).then((_device) => {
// useNuxtApp().$logger.log('RNBO device created:', device)
})
},
methods: {
handlePlayingUpdate () {
const noiseElement = this.$refs.Noise as typeof AudioElement
const noiseAudioElement = noiseElement.$refs.audioElement as HTMLMediaElement
const musicElement = this.$refs.Music as typeof AudioElement
const musicAudioElement = musicElement.$refs.audioElement as HTMLMediaElement
const audioContext = this.audioContext
const destination = this.audioContext.destination
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.noiseSource = audioContext.createMediaElementSource(noiseAudioElement)
this.createdNodes.musicSource = audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.noiseSource.connect(this.createdNodes.noiseGain)
this.createdNodes.musicSource.connect(this.createdNodes.musicGain)
this.createdNodes.noiseGain.connect(destination)
this.createdNodes.musicGain.connect(destination)
musicAudioElement.muted = false
noiseAudioElement.muted = false
},
updateNoiseGain (volume: number) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
},
updateMusicGain (volume: number) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
}
}
}
</script>
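The `updateNoiseGain`/`updateMusicGain` handlers above pass the emitted slider value straight into the gain ramp. A defensive sketch that clamps incoming values to the [0, 1] range a `GainNode` typically expects (an assumption about the slider's range; the component above does not do this):

```javascript
// Coerce and clamp a volume value to [0, 1]; non-numeric input falls back to 0.
function clampGain (volume) {
  return Math.min(1, Math.max(0, Number(volume) || 0))
}

console.log(clampGain(1.4), clampGain(-0.2), clampGain(0.75)) // → 1 0 0.75
```

Feeding the clamped value into `linearRampToValueAtTime` would prevent out-of-range slider events from overdriving the output.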


@ -1,676 +0,0 @@
<template>
<div
class="container"
style="display: flex; flex-wrap: wrap; gap:20px"
>
    <h1>The component is currently not working due to the change from buffer source nodes to audio element source nodes</h1>
<div v-if="playing == false" class="play yellow" @click.prevent="startPlay">
<audio id="hiddenAudio" controls style="display:none;" />
</div>
<div v-if="playing == true && isStartPlayRunning == false" class="pause yellow" @click.prevent="stopPlay">
<audio id="hiddenAudio" controls style="display:none;" />
</div>
<div v-if="isStartPlayRunning" class="spinner-border spinner-border-sm yellow" role="status">
      <span class="sr-only">Loading...</span>
</div>
<div class="row">
<div class="slider">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</div>
<input
id="gain-control"
v-model="outputNoiseGain"
type="range"
min="0"
max="100"
step="1"
@wheel="changeNoiseGain"
>
</div>
</div>
<div class="row">
<div class="slider">
<div class="icon">
<!-- tropic icon -->
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</div>
<input
id="gain-control"
v-model="outputMusicGain"
type="range"
min="0"
max="100"
step="1"
@wheel="changeMusicGain"
>
</div>
</div>
</div>
</template>
<script lang="ts">
import { type Device } from '@rnbo/js'
import { mapActions, mapState } from 'pinia'
import { useAudioStore } from '@/stores/interfaces/oldstores/audio_old'
import { ANC } from '@/stores/interfaces/ANC'
import { HeadsetType } from '@/stores/interfaces/HeadsetType'
import {
createRNBODevice
} from '@/lib/AudioFunctions'
import { useMicStore } from '@/stores/microphone'
import importedNoisePatcher from '@/assets/patch/noise_patch.export.json'
import importedMusicPatcher from '@/assets/patch/music_patch.export.json'
import { useUserStore } from '~/stores/user'
enum AttenuationFactor{
OverEarANC = 0.0562,
OverEar = 0.5623,
InEar = 0.1778,
InEarANC = 0.0316
}
interface ComponentData {
audioContext: AudioContext | null,
// Howler
html5AudioPoolSize: number,
// sources:
micSource: MediaStreamAudioSourceNode | null,
noiseAudioSource: AudioBufferSourceNode | null,
musicAudioSource: AudioBufferSourceNode | null,
music: Howl | null,
// RNBO devices:
noiseDevice: Device | null,
musicDevice: Device | null,
// modules:
noiseGainNode: GainNode | null,
musicGainNode: GainNode | null,
// user input:
outputNoiseGain: number | 0,
outputMusicGain: number | 0,
// music splitter node
musicInputChannelSplitter :ChannelSplitterNode | null,
noiseInputChannelSplitter: ChannelSplitterNode | null,
// keep for later:
micStream: MediaStream | null,
// fading
isFading: boolean | false,
// DEBUG:
eventLog: string,
  // Button handling to avoid spamming the button as long as the audio processing is busy
isStopPlayRunning: boolean,
isStartPlayRunning: boolean,
// component state:
// isPlaying: boolean <- replaced with the stored state
startTime: number
pauseTime: number
startOffset: number
hoveringNoiseSlider: false,
hoveringMusicSlider: false
}
export default {
name: 'RNBOPlayer',
props: {
scenery: {
type: String,
default: 'forest'
}
},
data (): ComponentData {
return {
audioContext: null,
micSource: null,
// Howler.js
html5AudioPoolSize: 0,
music: null,
noiseAudioSource: null,
musicAudioSource: null,
// RNBO devices:
noiseDevice: null,
musicDevice: null,
// modules:
noiseGainNode: null,
musicGainNode: null,
// user input:
outputNoiseGain: 0.0,
outputMusicGain: 0.0,
// music splitter node
// Music input channel splitter:
musicInputChannelSplitter: null,
noiseInputChannelSplitter: null,
// keep for later:
micStream: null,
// fading
isFading: false,
// component state:
// isPlaying: false, replaced with the audio store state
startTime: 0,
pauseTime: 0,
startOffset: 0,
// Handle state of buttons
isStopPlayRunning: false,
isStartPlayRunning: false,
eventLog: '',
// audio context playtime
hoveringNoiseSlider: false,
hoveringMusicSlider: false
}
},
computed: {
...mapState(useAudioStore, ['playing']),
...mapState(useUserStore, ['user']),
normalizedNoiseGain: function () {
return this.outputNoiseGain / 100.0
},
normalizedMusicGain: function () {
return this.outputMusicGain / 100.0
}
},
watch: {
scenery () {
// useNuxtApp().$logger.log('Scenery changed to: ' + this.scenery)
},
/**
* This methods watches changes on the music gain slider and use a simple ramp for adaption
* @param newValue
*/
async outputMusicGain (newValue) {
// useNuxtApp().$logger.log('Value Music Changed: new Volume is ' + newValue)
this.outputMusicGain = newValue
const musicGainNode = this.musicGainNode
const audioCtx = await useAudioStore().getAudioContext()
if (musicGainNode && !this.isFading) { musicGainNode.gain.linearRampToValueAtTime(this.normalizedMusicGain, audioCtx.currentTime + 0.5) }
},
/**
* This methods watches changes on the noise gain slider and use a simple ramp for adaption
* @param newValue
*/
async outputNoiseGain (newValue) {
// useNuxtApp().$logger.log('Value Noise Changed: new Volume is ' + newValue)
this.outputNoiseGain = newValue
const noiseGainNode = this.noiseGainNode
const audioCtx = await useAudioStore().getAudioContext()
if (noiseGainNode && !this.isFading) { noiseGainNode.gain.linearRampToValueAtTime(this.normalizedNoiseGain, audioCtx.currentTime + 0.5) }
}
},
beforeUnmount (): void {
this.slowlyFadeOutAudio(1000)
useAudioStore().resetAudioContext()
},
mounted () {
const audioStore = useAudioStore()
const micStore = useMicStore()
// Change global volume.
Howler.volume(1)
// bufferStore.preLoadAudioFiles()
micStore.getMicrophone()
this.startPlay()
this.audioContext = audioStore.getAudioContext()
const hiddenAudio = document.getElementById('hiddenAudio')
if (hiddenAudio instanceof HTMLAudioElement) {
const sink = useUserStore().audioOutputDevice as any
hiddenAudio.setSinkId(sink.deviceId)
}
hiddenAudio?.addEventListener('play', (_e) => {
// useNuxtApp().$logger.log('Eventhandler play of hiddenAudio: ' + e)
if (!this.isStartPlayRunning || !useAudioStore().playing) {
this.startPlay()
}
})
hiddenAudio?.addEventListener('pause', (_e) => {
// useNuxtApp().$logger.log('Eventhandler pause of hiddenAudio: ' + e)
this.stopPlay()
})
if ('mediaSession' in navigator) {
this.showMediaInformation()
// Play action
navigator.mediaSession.setActionHandler('play', (_e) => {
// Your play action here
// useNuxtApp().$logger.log('Eventhandler play of navigator: ' + e)
        if (!this.isStartPlayRunning || !useAudioStore().playing) { this.startPlay() }
})
// Pause action
navigator.mediaSession.setActionHandler('pause', (_e) => {
this.stopPlay()
})
/* CURRENTLY NOT IMPLEMENTED MORE A FEATURE
// Previous track action
navigator.mediaSession.setActionHandler('previoustrack', () => {
const localePath = useLocalePath()
// Your previous track action here
if (this.scenery === 'Tropics') { this.scenery }
localePath('Home' + this.scenery)
// useNuxtApp().$logger.log('Previous track button pressed')
})
// Next track action
navigator.mediaSession.setActionHandler('nexttrack', () => {
// Your next track action here
// useNuxtApp().$logger.log('Next track button pressed')
})
*/
}
},
methods: {
...mapActions(useAudioStore, ['addNode', 'startAllSources', 'startNode']),
/**
* This methods the audio playback
*/
showMediaInformation () {
if ('mediaSession' in navigator) {
navigator.mediaSession.metadata = new MediaMetadata({
title: this.scenery,
artist: 'Mindboost',
artwork: [
{ src: '/images/scenery/' + this.scenery + '.jpg', sizes: '96x96', type: 'image/jpeg' }
]
})
}
},
/**
* This method is called to stop playing music and noise. As audiobuffernodes can only be started once,
* the audiocontext is used to create new buffernodes.
* @param audioCtx
*/
changeNoiseGain (event:WheelEvent): void {
event.preventDefault()
// Determine the direction of the scroll
const delta = Math.sign(event.deltaY)
if (this.outputNoiseGain - delta < 101 && this.outputNoiseGain - delta > -1) {
this.outputNoiseGain -= delta
}
},
changeMusicGain (event:WheelEvent): void {
event.preventDefault()
// Determine the direction of the scroll
const delta = Math.sign(event.deltaY)
if (this.outputMusicGain - delta < 101 && this.outputMusicGain - delta > -1) {
this.outputMusicGain -= delta
}
},
stopPlay () {
if (this.isStopPlayRunning) {
// useNuxtApp().$logger.log('StopPlay is already running. Please wait.')
return
}
this.isStopPlayRunning = true
if (useAudioStore().isPlaying) {
// useNuxtApp().$logger.log('Its play is. Please wait, we stop it')
this.slowlyFadeOutAudio(1000)
// this.music?.stop()
}
useMicStore().detachMicrophone()
this.isStopPlayRunning = false
},
    slowlyFadeOutAudio (duration = 5000) {
      this.isFading = true
      let currentValue = 100
      const targetValue = 0
      const step = 100 / (duration / 50) // how much to decrement every 50ms
      const now = this.audioContext ? this.audioContext.currentTime : 0
      this.musicGainNode?.gain.linearRampToValueAtTime(0, now + duration / 1000) // ramp end is an absolute context time
      this.noiseGainNode?.gain.linearRampToValueAtTime(0, now + duration / 1000)
      const intervalId = setInterval(() => {
        currentValue -= step
        if (currentValue <= targetValue) {
          currentValue = targetValue // Ensure it doesn't go below 0
          clearInterval(intervalId) // Stop the interval
          if (this.musicAudioSource instanceof MediaElementAudioSourceNode) {
            this.musicAudioSource.disconnect()
            this.musicAudioSource = null
          }
          if (this.noiseAudioSource instanceof MediaElementAudioSourceNode) {
            this.noiseAudioSource.disconnect()
            this.noiseAudioSource = null
          }
          useAudioStore().setPlaying(false)
          this.isFading = false
        }
        this.outputNoiseGain = currentValue
        this.outputMusicGain = currentValue
      }, 50) // Update every 50 milliseconds
    },
/**
* raises slowly the sliders to trigger the watcher that fades in the audio
* @param duration
*/
    slowlyFadeInAudio (duration = 5000) {
      this.isFading = true
      let currentValue = 1
      const targetValue = 100
      const step = targetValue / (duration / 30) // how much to increment every 30ms
      const intervalId = setInterval(() => {
        currentValue += step
        if (currentValue >= targetValue) {
          currentValue = targetValue // Ensure it doesn't go over 100
          clearInterval(intervalId) // Stop the interval
          this.isFading = false
        }
        this.outputNoiseGain = currentValue
        this.outputMusicGain = currentValue
      }, 30) // Update every 30 milliseconds
    },
testHowler () {
},
async startPlay () {
const audioStore = useAudioStore()
const microStore = useMicStore()
// const bufferStore = useBufferStore()
//
// State management of playing audio
//
if (this.isStartPlayRunning) { return } else { this.isStartPlayRunning = true }
// get the current context
const audioContext = await useAudioStore().getAudioContext()
// get inputs
const mic = await microStore.getMicrophone()
const microphone = useAudioStore().getMicrophoneNode()
this.micSource = mic.microphoneNode as MediaStreamAudioSourceNode
this.micStream = mic.microphoneStream // needed later to stop audio input
// get audio files:
/*
this.noiseAudioSource = await audioStore.getBufferSourceNode('noise')
this.musicAudioSource = await audioStore.getBufferSourceNode(this.scenery)
*/
const noiseAudioSource = await audioStore.getBufferSourceNode('noise')
const musicAudioSource = await audioStore.getBufferSourceNode(this.scenery)
// const musicAudioSource:AudioNodeItem = await audioStore.getBufferSourceNode(this.scenery)
// RNBO devices:
this.musicDevice = await createRNBODevice(audioContext, importedMusicPatcher)
this.noiseDevice = await createRNBODevice(audioContext, importedNoisePatcher)
const musicDevice = audioStore.addNode('musicDevice', this.musicDevice.node)
const noiseDevice = audioStore.addNode('noiseDevice', this.noiseDevice.node)
// Music input channel splitter:
this.musicInputChannelSplitter = new ChannelSplitterNode(
audioContext,
{ numberOfOutputs: 2 }
)
const musicInputChannelSplitter = audioStore.addNode('musicInputChannelSplitter', this.musicInputChannelSplitter)
this.noiseInputChannelSplitter = new ChannelSplitterNode(
audioContext,
{ numberOfOutputs: 2 }
)
const noiseInputChannelSplitter = audioStore.addNode('noiseInputChannelSplitter', this.noiseInputChannelSplitter)
const musicOutputChannelSplitterNode : ChannelSplitterNode = new ChannelSplitterNode(
audioContext,
{ numberOfOutputs: 2 }
)
const musicOutputChannelSplitter = audioStore.addNode('musicOutputChannelSplitter', musicOutputChannelSplitterNode)
/**
*
* GAIN NODES
*/
const musicGainNode : GainNode = audioContext.createGain()
const noiseGainNode : GainNode = audioContext.createGain()
musicGainNode.gain.value = 0.5
noiseGainNode.gain.value = 0.5
this.noiseGainNode = noiseGainNode
this.musicGainNode = musicGainNode
const musicGain = audioStore.addNode('musicGainNode', musicGainNode)
const noiseGain = audioStore.addNode('noiseGainNode', noiseGainNode)
/**
* MUSIC PATCH
*/
try {
if (microphone && musicDevice) { audioStore.connectNodes(microphone, musicDevice, 0, 0) }
if (musicAudioSource && musicInputChannelSplitter) { audioStore.connectNodes(musicAudioSource, musicInputChannelSplitter, 0, 0) } // 2
if (musicDevice && musicInputChannelSplitter) { audioStore.connectNodes(musicInputChannelSplitter, musicDevice, 0, 1) } // 2
if (musicDevice && musicInputChannelSplitter) { audioStore.connectNodes(musicInputChannelSplitter, musicDevice, 1, 2) } // 1
} catch (error) {
// const logger = useNuxtApp().$logger as Logger
// logger.info('Initializing audio context...')
}
// Gains of Music PATCH
const destination = await audioStore.addNode('destination', audioContext.destination)
audioStore.connectNodes(musicDevice, musicGain) // 2
audioStore.connectNodes(musicGain, destination) // 2
/**
* NOISE PATCH
*/
audioStore.connectNodes(microphone, noiseDevice, 0, 0)
audioStore.connectNodes(noiseAudioSource, noiseInputChannelSplitter, 0, 0) // 2
audioStore.connectNodes(noiseInputChannelSplitter, noiseDevice, 0, 1) // 1
audioStore.connectNodes(noiseInputChannelSplitter, noiseDevice, 1, 2) // 1
// cross connect music to audio devices:
audioStore.connectNodes(musicDevice, musicOutputChannelSplitter, 0, 0) // 2
audioStore.connectNodes(musicOutputChannelSplitter, noiseDevice, 0, 3) // 1
audioStore.connectNodes(musicOutputChannelSplitter, noiseDevice, 1, 4) // 1
// this.noiseDevice?.node.connect(audioContext.destination, 0, 0) // 2
audioStore.connectNodes(noiseDevice, noiseGain, 0, 0) // 2
/**
* Gain of the noise
*/
audioStore.connectNodes(noiseGain, destination)
// noiseGainNode.connect(audioContext.destination)
/**
* START ALL SOURCES
*/
// audioStore.startAllSources()
// useNuxtApp().$logger.error('Audio Source is not defined and cannot be started')
this.startTime = audioContext.currentTime
this.pauseTime = 0
/**
* Parameters for rnbo device
*/
const attenuationFactor = this.noiseDevice.parametersById.get('attenuation')
attenuationFactor.value = getAttenuationFactor('Yes' as ANC, 'OverEar' as HeadsetType)
this.noiseDevice.messageEvent.subscribe((e) => {
if (e.tag === 'out7') {
this.eventLog = e.payload + '\n'
}
})
/**
* DEBUG
*/
// call the fade in in 3s
this.slowlyFadeInAudio(3000)
this.isStartPlayRunning = false
}
}
}
function getAttenuationFactor (anc: ANC, ht: HeadsetType): number {
  if (anc === ANC.Yes && ht === HeadsetType.OverEar) { return AttenuationFactor.OverEarANC }
  if (anc === ANC.No && ht === HeadsetType.OverEar) { return AttenuationFactor.OverEar }
  if (anc === ANC.Yes && ht === HeadsetType.InEar) { return AttenuationFactor.InEarANC }
  if (anc === ANC.No && ht === HeadsetType.InEar) { return AttenuationFactor.InEar }
  return AttenuationFactor.OverEar // default: over-ear without ANC
}
</script>
<style scoped>
.player {
background-color: #fff;
border-radius: 12px;
}
.player button {
border-radius: 10px;
padding: 10px;
}
.container {
display: flex;
flex-wrap: wrap;
gap: 20px; /* Spacing between items */
width: 225px;
margin-bottom: 20px;
}
.icon, .slider {
flex: 1 1 100px; /* Flex-grow, flex-shrink, flex-basis */
display: flex;
align-items: center; /* Center items vertically */
justify-content: center; /* Center items horizontally */
}
.icon {
/* Add padding around the icon for margin */
margin-right: 15px; /* Adjust this value as needed */
/* Align items if using flexbox */
display: flex;
align-items: center;
justify-content: center;
}
.icon img {
/* Adjust width and height as needed or keep them auto to maintain aspect ratio */
width: auto;
height: 100%; /* Example height, adjust based on your icon size */
}
.slider input[type=range] {
width: 100%; /* Full width of its parent */
background-color: transparent !important;
}
@media (min-width: 600px) {
.row {
display: flex;
width: 100%;
}
.icon, .slider {
flex: 1; /* Take up equal space */
}
}
/* Styles the track */
input[type="range"]::-webkit-slider-runnable-track {
background: #e9c046; /* yellow track */
height: 8px;
border-radius: 5px;
}
input[type="range"]::-moz-range-track {
background: #e9c046; /* yellow track */
height: 8px;
border-radius: 5px;
}
input[type="range"]::-ms-track {
background: #e9c046; /* yellow track */
border-color: transparent;
color: transparent;
height: 8px;
}
.play.yellow {
background: rgba(255, 176, 66, 0.008);
border-radius: 50%;
box-shadow: 0 0 0 0 rgba(255, 177, 66, 1);
animation: pulse-yellow 4s infinite;
position: fixed;
bottom: 40%;
width: 200px;
height: 200px;
background-image: url('/images/playbtn.svg');
background-repeat:no-repeat;
background-attachment:fixed;
background-position: 58% 55%;
}
.pause.yellow {
background: rgba(255, 176, 66, 0.008);
border-radius: 50%;
box-shadow: 0 0 0 0 rgba(255, 177, 66, 1);
opacity: 0.05;
position: fixed;
bottom: 40%;
width: 200px;
height: 200px;
background-image: url('/images/pausebtn.svg');
background-size: 130px 100px;
background-repeat:no-repeat;
background-attachment:fixed;
background-position: center;
}
.pause.yellow:hover{
opacity: 0.5;
}
@keyframes pulse-yellow {
0% {
transform: scale(0.95);
box-shadow: 0 0 0 0 rgba(255, 177, 66, 0.7);
}
70% {
transform: scale(1);
box-shadow: 0 0 0 10px rgba(255, 177, 66, 0);
}
100% {
transform: scale(0.95);
box-shadow: 0 0 0 0 rgba(255, 177, 66, 0);
}
}
</style>
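The `AttenuationFactor` constants in the file above are linear gain factors. Converting them to decibels makes the modeled headset isolation easier to read (a side calculation for illustration, not code from the component):

```javascript
// Convert a linear amplitude gain to decibels: dB = 20 * log10(gain).
function linearToDb (gain) {
  return 20 * Math.log10(gain)
}

console.log(linearToDb(0.0562).toFixed(1)) // OverEarANC → "-25.0" dB
console.log(linearToDb(0.5623).toFixed(1)) // OverEar    → "-5.0" dB
```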


@ -1,68 +0,0 @@
<!-- eslint-disable vue/multi-word-component-names -->
<template>
<div>
    Do not load multiple tests without unloading others.
<div>
NoisePatchMusic
<NoisePatchMusic v-if="tests['noisepatch'].active" />
      <button
        v-if="!tests['noisepatch'].active"
        @click="toggleState('noisepatch')"
      >
        Load
      </button>
      <button
        v-if="tests['noisepatch'].active"
        @click="toggleState('noisepatch')"
      >
        Unload
      </button>
</div>
<div>
AudioElementManager (all patches connected)
<audioElementManager v-if="tests['complete'].active" />
      <button
        v-if="!tests['complete'].active"
        @click="toggleState('complete')"
      >
        Load
      </button>
      <button
        v-if="tests['complete'].active"
        @click="toggleState('complete')"
      >
        Unload
      </button>
</div>
<div>
RNBOplayer
<RNBOPlayer v-if="tests['loadrnbo'].active" />
      <button
        v-if="!tests['loadrnbo'].active"
        @click="toggleState('loadrnbo')"
      >
        Load
      </button>
      <button
        v-if="tests['loadrnbo'].active"
        @click="toggleState('loadrnbo')"
      >
        Unload
      </button>
</div>
</div>
</template>
<script setup>
// import Player from '~/components/Player.vue'
import NoisePatchMusic from '~/archive/components/tests/NoisePatchMusic_glitchOnLoad.vue'
import audioElementManager from '~/archive/pages/experiments/audioElementManager.vue'
import RNBOPlayer from '~/archive/components/tests/RNBOPlayer.vue'
const tests = ref({
noisepatch: { active: false },
loadrnbo: { active: false },
complete: { active: false }
})
const toggleState = (stateName) => {
const test = tests.value
test[stateName].active = !test[stateName].active
}
</script>
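The page above warns against loading several tests at once, but `toggleState` does not enforce that. A sketch of an exclusive variant (hypothetical; the component's actual `toggleState` toggles entries independently):

```javascript
// Toggle one test on/off while deactivating all others, so at most
// one test component is mounted at a time.
function toggleExclusive (tests, name) {
  const next = !tests[name].active
  for (const key of Object.keys(tests)) { tests[key].active = false }
  tests[name].active = next
  return tests
}

const tests = { noisepatch: { active: false }, loadrnbo: { active: true } }
toggleExclusive(tests, 'noisepatch')
console.log(tests.noisepatch.active, tests.loadrnbo.active) // → true false
```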


@ -1,376 +0,0 @@
<template>
<div>
    <h2>Press Space to start, wait a few seconds, then the sound should start. You will hear the artifacts right after the music begins</h2>
    <h2>This file contains both patches; use the gain sliders of AudioElement to control volume and to allow skipping. No AudioStore used</h2>
<div class="rnboplayer">
<div v-if="selectedAudioTitle">
{{ selectedAudio.title }}
<AudioElement
:ref="selectedAudio.title"
:key="selectedAudio.id"
:src="selectedAudio.src"
:title="selectedAudio.title"
@update:playing="handlePlayingUpdate"
@update:volume="changeMusicVolume"
>
<template #default="{}">
<img style="width: 25px" src="~/assets/image/musicicon.svg">
</template>
</AudioElement>
</div>
<AudioElement
:ref="noise.title"
:key="noise.id"
:src="noise.src"
:title="noise.title"
@update:volume="changeNoiseVolume"
>
<template #default="{}">
<img style="width: 25px" src="~/assets/image/noiseicon.svg">
</template>
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
<button @click="skip">
Skip
</button>
</div>
</div>
</template>
<script>
import AudioElement from '@/components/experiments/AudioElement.vue'
import importedMusicPatcher from '@/assets/patch/music_patch.export.json'
import importedNoisePatcher from '@/assets/patch/noise_patch.export.json'
import { createRNBODevice } from '@/lib/AudioFunctions'
// import setupNodes from '@/components/Player/Nodes'
// import { useAudioStore } from '~/stores/audio'
export default {
components: {
AudioElement
},
data () {
return {
audioList: [
{ id: 1, title: 'Lagoon', src: window.location.origin + '/sounds/lagoon.m4a' },
{ id: 2, title: 'Forest', src: window.location.origin + '/sounds/forest.m4a' },
{ id: 3, title: 'Meadow', src: window.location.origin + '/sounds/meadow.m4a' },
{ id: 4, title: 'Tropics', src: window.location.origin + '/sounds/tropics.m4a' }
],
noise: { id: 5, title: 'Noise', src: window.location.origin + '/masking/noise.flac' },
selectedAudioTitle: '',
currentElement: {},
createdNodes: {},
musicPatch: importedMusicPatcher,
noisePatch: importedNoisePatcher,
audioContext: null,
playState: false
}
},
computed: {
selectedAudio () {
return this.audioList.find(audio => audio.title === this.selectedAudioTitle) || null
},
isPlaying () {
return this.playState
}
},
beforeUnmount () {
this.disconnectNodes()
if (this.audioContext) { this.audioContext.close() }
this.audioContext = null
},
mounted () {
this.audioContext = new AudioContext()
// Example: Select 'Song Three' by default
this.selectAudioByTitle(this.audioList[0].title)
this.addUserNavigationHandling()
this.musicPatch = importedMusicPatcher
this.noisePatch = importedNoisePatcher
},
methods: {
checkAudioContext () {
// Check if there are any keys (node names) in the createdNodes object
if (Object.keys(this.createdNodes).length === 0) {
return false
}
// Check if the 'musicGain' node exists and has a context
if (this.createdNodes.musicGain && this.createdNodes.musicGain.context) {
const audioContextOfNodes = this.createdNodes.musicGain.context
// Compare the contexts
if (this.audioContext === audioContextOfNodes) {
// useNuxtApp().$logger.log('Same context')
return true
} else {
if (audioContextOfNodes.state === 'closed' || audioContextOfNodes.state === 'suspended') {
// useNuxtApp().$logger.log('AudioContext of nodes is closed or suspended')
return false
} else {
this.audioContext = audioContextOfNodes
}
return false
}
} else {
// useNuxtApp().$logger.log('musicGain node does not exist or has no context')
return false
}
},
changeNoiseVolume (volume) {
if (this.createdNodes.noiseGain) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
this.noiseGain = this.createdNodes.noiseGain.gain.value
}
},
changeMusicVolume (volume) {
if (this.createdNodes.musicGain) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
this.musicGain = this.createdNodes.musicGain.gain.value
}
},
addUserNavigationHandling () {
if ('mediaSession' in navigator) {
// Set the handler for the next track action
navigator.mediaSession.setActionHandler('nexttrack', () => {
this.skip('next')
})
// Set the handler for the previous track action
navigator.mediaSession.setActionHandler('previoustrack', () => {
this.skip('previous')
})
}
},
getSong (currentTitle, direction) {
const index = this.audioList.findIndex(song => song.title === currentTitle)
let adjacentIndex = index + (direction === 'next' ? 1 : -1)
// Loop back to the first song if 'next' goes beyond the last index
if (adjacentIndex >= this.audioList.length) {
adjacentIndex = 0
} else if (adjacentIndex < 0) {
adjacentIndex = this.audioList.length - 1
}
return this.audioList[adjacentIndex]
},
skip (direction) {
const nextSong = this.getSong(this.selectedAudioTitle, direction)
this.selectAudioByTitle(nextSong.title)
},
updateCurrentElement (title) {
this.currentElement = this.audioList.find(e => e.title === title)
},
selectAudioByTitle (title) {
this.selectedAudioTitle = title
// const audioElement = this.audioList.find(e => e.title === this.selectedAudioTitle)
this.currentElement = this.audioList[this.audioList.findIndex(e => e.title === title)]
// useNuxtApp().$logger.log('currentElement: ', this.selectedAudioTitle)
},
areAllNodesAvailable () {
// List of required node keys as you've named them in this.createdNodes
const requiredNodes = [
'musicSource', 'musicSplitter', 'microphoneSource',
'musicDevice', 'outputSplitter', 'musicGain',
'noiseSource', 'noiseSplitter', 'noiseDevice',
'noiseGain', 'merger'
]
// Check if each required node exists and is not undefined in this.createdNodes
return requiredNodes.every(nodeKey => this.createdNodes[nodeKey] !== undefined)
},
connectNodes () {
// Destructure for easier access
const {
musicSource, musicSplitter, microphoneSource,
musicDevice, outputSplitter, musicGain,
noiseSource, noiseSplitter, noiseDevice,
noiseGain, merger
} = this.createdNodes
// Assuming all nodes are created and references to them are correct
// Connect music source to splitter (Stereo to 2 Channels)
musicSource.connect(musicSplitter) // 2 channels: L, R
// Connect microphone to musicDevice input 0 (Mono)
microphoneSource.connect(musicDevice, 0, 0) // 1 channel: Microphone
// Connect musicSplitter to musicDevice inputs 1 and 2 (Stereo)
musicSplitter.connect(musicDevice, 0, 1) // 1 channel: Left
musicSplitter.connect(musicDevice, 1, 2) // 1 channel: Right
// Connect musicDevice to outputSplitter (Stereo out)
musicDevice.connect(outputSplitter) // Assuming musicDevice outputs stereo
// Optionally connect musicDevice to musicGain for additional gain control (Stereo)
musicDevice.connect(musicGain) // Assuming musicDevice outputs stereo, connected to both channels of musicGain
// Connect noise source to noiseSplitter (Stereo to 2 Channels)
noiseSource.connect(noiseSplitter) // 2 channels: L, R
// Connect microphone to noiseDevice input 0 (Mono)
microphoneSource.connect(noiseDevice, 0, 0) // 1 channel: Microphone
// Connect noiseSplitter to noiseDevice inputs 1 and 2 (Stereo)
noiseSplitter.connect(noiseDevice, 0, 1) // 1 channel: Left
noiseSplitter.connect(noiseDevice, 1, 2) // 1 channel: Right
// Connect outputSplitter to noiseDevice inputs 3 and 4 (Stereo from musicDevice)
outputSplitter.connect(noiseDevice, 0, 3) // 1 channel: Left from musicDevice
outputSplitter.connect(noiseDevice, 1, 4) // 1 channel: Right from musicDevice
// Assuming noiseDevice outputs stereo, connect to noiseGain (Stereo)
noiseDevice.connect(noiseGain) // Assuming noiseDevice outputs stereo, connected to both channels of noiseGain
// Assuming you want to merge and output both processed signals (Stereo from both musicGain and noiseGain)
// Connect noiseGain to the first two inputs of the merger (Stereo)
noiseGain.connect(merger, 0, 0) // 1 channel: Left
noiseGain.connect(merger, 0, 1) // 1 channel: Right
// Connect musicGain to the last two inputs of the merger (Stereo)
musicGain.connect(merger, 0, 2) // 1 channel: Left
musicGain.connect(merger, 0, 3) // 1 channel: Right
// Finally, connect the merger to the audio context's destination (Stereo to output)
merger.connect(merger.context.destination)
},
handlePlayingUpdate (isPlaying) {
const ContextClass = (window.AudioContext || window.webkitAudioContext || window.mozAudioContext || window.oAudioContext || window.msAudioContext)
const audioContext = this.audioContext ? this.audioContext : new ContextClass()
let microphoneSource = null
// useNuxtApp().$logger.log('Handling playing update = ' + isPlaying) // true when playing
if (isPlaying) {
// Web Audio API is available.
// const musicAudioComponent = this.$refs[this.currentElement.title].audioElement
if (this.areAllNodesAvailable() && this.checkAudioContext()) {
this.disconnectNodes()
const musicAudioElement = this.$refs[this.currentElement.title].$refs.audioElement
          const musicGainValue = this.createdNodes.musicGain.gain.value
          const noiseGainValue = this.createdNodes.noiseGain.gain.value
// Prepare the audio elements (unmute)
this.$refs[this.currentElement.title].$refs.audioElement.muted = false
this.$refs.Noise.$refs.audioElement.muted = false
// replace music node because all Nodes are there
this.createdNodes.musicSource = null
this.createdNodes.musicSource = audioContext.createMediaElementSource(musicAudioElement)
this.connectNodes()
if (musicGainValue > 0 && musicGainValue <= 1.0) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(musicGainValue || 1.0, audioContext.currentTime + 2)
}
if (noiseGainValue > 0 && noiseGainValue <= 1.0) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(noiseGainValue || 1.0, audioContext.currentTime + 3)
}
} else {
navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: false,
noiseSuppression: false,
autoGainControl: false
},
video: false
})
.then((micStream) => {
microphoneSource = this.audioContext.createMediaStreamSource(micStream)
this.createdNodes.microphoneSource ||= microphoneSource
// Return both audioContext and microphoneSource to the next link in the chain
return { audioContext, microphoneSource }
})
.then(({ audioContext, microphoneSource }) => {
// First RNBO device creation
return createRNBODevice(audioContext, this.noisePatch).then((noiseRNBODevice) => {
this.createdNodes.noiseRNBODevice ||= noiseRNBODevice
              // Return all necessary objects for the next step
return { audioContext, microphoneSource, noiseRNBODevice }
})
})
.then(({ audioContext, microphoneSource, noiseRNBODevice }) => {
// Second RNBO device creation
return createRNBODevice(audioContext, this.musicPatch).then((musicRNBODevice) => {
// Return all necessary objects for the final setup
this.createdNodes.musicRNBODevice ||= musicRNBODevice
return { audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }
})
}).then(({ audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }) => {
const musicElement = this.$refs[this.currentElement.title]
const musicAudioElement = musicElement.$refs.audioElement
// Ensure this.createdNodes is initialized
this.createdNodes ||= {}
// Assign nodes if they don't exist
this.createdNodes.microphoneSource ||= microphoneSource
this.createdNodes.musicDevice ||= musicRNBODevice.node
this.createdNodes.noiseDevice ||= noiseRNBODevice.node
this.createdNodes.noiseSource ||= audioContext.createMediaElementSource(this.$refs.Noise.$refs.audioElement)
this.createdNodes.musicSource ||= audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.musicSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.noiseSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.outputSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.merger ||= audioContext.createChannelMerger(4)
musicAudioElement.muted = false
this.$refs.Noise.$refs.audioElement.muted = false
this.createdNodes.musicGain.gain.value = 0.0001
            this.createdNodes.noiseGain.gain.value = 0.0001 // does nothing
this.connectNodes()
setTimeout(() => {
this.playState = true
this.createdNodes.musicGain.gain.exponentialRampToValueAtTime(1, audioContext.currentTime + 3)
this.createdNodes.noiseGain.gain.exponentialRampToValueAtTime(1, audioContext.currentTime + 4)
}, 150)
})
.catch((_error) => {
this.disconnectNodes()
if (this.audioContext) {
this.audioContext.destination.disconnect()
// audioContext.destination = null
}
// useNuxtApp().$logger.error('Error setting up audio:', error)
})
}
} else {
this.disconnectNodes()
const newAudioContext = new AudioContext()
this.audioContext.close()
this.audioContext = newAudioContext
}
},
disconnectNodes () {
// Ensure this.createdNodes is defined and is an object
if (typeof this.createdNodes === 'object' && this.createdNodes !== null) {
Object.values(this.createdNodes).forEach((node) => {
// Check if the node exists and has a disconnect method
if (node && typeof node.disconnect === 'function') {
node.disconnect()
}
// Additional handling for specific types of nodes, if necessary
// For example, stopping an OscillatorNode
if (node instanceof OscillatorNode) {
node.stop()
}
})
this.playState = false
}
}
}
}
</script>
<style scoped>
.rnboplayer {
position: fixed;
  width: 220px;
  max-width: 220px;
height: 100px;
display: inline-grid;
z-index: 2;
bottom: 11%;
left: 0;
right: 0;
margin-left: auto;
margin-right: auto;
}
</style>
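The wrap-around adjacent-track lookup implemented by `getSong` above can be sketched as a standalone function. This is a minimal illustration; the names `adjacentSong`, `list`, and `demo` are placeholders, not part of the component:

```javascript
// Sketch of the wrap-around adjacent-track lookup used by getSong.
// `list` is an array of { title } objects; `direction` is 'next' or 'previous'.
function adjacentSong (list, currentTitle, direction) {
  const index = list.findIndex(song => song.title === currentTitle)
  let adjacentIndex = index + (direction === 'next' ? 1 : -1)
  if (adjacentIndex >= list.length) {
    adjacentIndex = 0 // past the end: wrap to the first track
  } else if (adjacentIndex < 0) {
    adjacentIndex = list.length - 1 // before the start: wrap to the last track
  }
  return list[adjacentIndex]
}

const demo = [{ title: 'Lagoon' }, { title: 'Forest' }, { title: 'Meadow' }]
console.log(adjacentSong(demo, 'Meadow', 'next').title) // wraps to 'Lagoon'
```

The same wrap logic backs both the Skip button and the `mediaSession` next/previous handlers.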
@ -1,7 +0,0 @@
<template>
<AudioElementManager />
</template>
<script setup>
import AudioElementManager from '~/archive/components/tests/AudioElementManager.vue'
</script>
@ -1,365 +0,0 @@
<template>
<div>
<h2>Firefox: wait while it loads; after a few seconds the sound should start. You will hear the artifacts right after the music begins.</h2>
<h2>This file contains both patches; use the gain sliders of the AudioElement to control volume and to skip tracks. The AudioStore is used.</h2>
<div class="rnboplayer">
<div v-if="selectedAudioTitle">
{{ selectedAudio.title }}
<AudioElement
:ref="selectedAudio.title"
:key="selectedAudio.id"
:src="selectedAudio.src"
:title="selectedAudio.title"
@update:playing="handlePlayingUpdate"
@update:volume="changeMusicVolume"
>
<template #default="{}" />
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</div>
<AudioElement
:ref="noise.title"
:key="noise.id"
:src="noise.src"
:title="noise.title"
@update:volume="changeNoiseVolume"
>
<template #default="{}" />
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
<button @click="skip">
Skip
</button>
</div>
</div>
</template>
<script>
import AudioElement from '@/components/experiments/AudioElement.vue'
import importedMusicPatcher from '@/assets/patch/music_patch.export.json'
import importedNoisePatcher from '@/assets/patch/noise_patch.export.json'
import { createRNBODevice } from '@/lib/AudioFunctions'
// import setupNodes from '@/components/Player/Nodes'
// import { useAudioStore } from '~/stores/audio'
export default {
components: {
AudioElement
},
data () {
return {
audioList: [
{ id: 1, title: 'Lagoon', src: window.location.origin + '/sounds/lagoon.m4a' },
{ id: 2, title: 'Forest', src: window.location.origin + '/sounds/forest.m4a' },
{ id: 3, title: 'Meadow', src: window.location.origin + '/sounds/meadow.m4a' },
{ id: 4, title: 'Tropics', src: window.location.origin + '/sounds/tropics.m4a' }
],
noise: { id: 5, title: 'Noise', src: window.location.origin + '/masking/noise.flac' },
selectedAudioTitle: '',
currentElement: {},
createdNodes: {},
musicPatch: importedMusicPatcher,
noisePatch: importedNoisePatcher,
audioContext: null,
playState: false
}
},
computed: {
selectedAudio () {
return this.audioList.find(audio => audio.title === this.selectedAudioTitle) || null
},
isPlaying () {
return this.playState
},
checkContextCompatibility () {
// Assume createdNodes is an object where each property is a node
const nodes = Object.values(this.createdNodes)
const incompatible = nodes.filter(node => node.context !== this.audioContext)
      if (incompatible.length === 0) { return true }
      // useNuxtApp().$logger.log(incompatible.length + '/' + nodes.length + ' are incompatible')
this.handleIncompatibeError(incompatible)
return false
}
  },
  beforeUnmount () {
    this.disconnectNodes()
    if (this.audioContext) { this.audioContext.close() }
    this.audioContext = null
  },
  mounted () {
    this.audioContext = new AudioContext()
    // Select the first track by default
    this.selectAudioByTitle(this.audioList[0].title)
    this.addUserNavigationHandling()
    this.musicPatch = importedMusicPatcher
    this.noisePatch = importedNoisePatcher
  },
  methods: {
    handleIncompatibeError (incompatible) {
      if (incompatible && incompatible.length) {
        const node = incompatible.pop()
        if (node.context !== this.createdNodes.musicSplitter.context) {
          const audioContext = this.createdNodes.musicSplitter.context
          const musicAudioElement = this.$refs[this.currentElement.title].$refs.audioElement
          this.createdNodes.musicSource ||= audioContext.createMediaElementSource(musicAudioElement)
        }
      } else {
        // useNuxtApp().$logger.log('no error to solve')
      }
    },
changeNoiseVolume (volume) {
if (this.createdNodes.noiseGain) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
this.noiseGain = this.createdNodes.noiseGain.gain.value
}
},
changeMusicVolume (volume) {
if (this.createdNodes.musicGain) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
this.musicGain = this.createdNodes.musicGain.gain.value
}
},
addUserNavigationHandling () {
if ('mediaSession' in navigator) {
// Set the handler for the next track action
navigator.mediaSession.setActionHandler('nexttrack', () => {
this.skip('next')
})
// Set the handler for the previous track action
navigator.mediaSession.setActionHandler('previoustrack', () => {
this.skip('previous')
})
}
},
getSong (currentTitle, direction) {
const index = this.audioList.findIndex(song => song.title === currentTitle)
let adjacentIndex = index + (direction === 'next' ? 1 : -1)
// Loop back to the first song if 'next' goes beyond the last index
if (adjacentIndex >= this.audioList.length) {
adjacentIndex = 0
} else if (adjacentIndex < 0) {
adjacentIndex = this.audioList.length - 1
}
return this.audioList[adjacentIndex]
},
skip (direction) {
const nextSong = this.getSong(this.selectedAudioTitle, direction)
this.selectAudioByTitle(nextSong.title)
},
updateCurrentElement (title) {
this.currentElement = this.audioList.find(e => e.title === title)
},
selectAudioByTitle (title) {
this.selectedAudioTitle = title
// const audioElement = this.audioList.find(e => e.title === this.selectedAudioTitle)
this.currentElement = this.audioList[this.audioList.findIndex(e => e.title === title)]
// useNuxtApp().$logger.log('currentElement: ', this.selectedAudioTitle)
},
areAllNodesAvailable () {
// List of required node keys as you've named them in this.createdNodes
const requiredNodes = [
'musicSource', 'musicSplitter', 'microphoneSource',
'musicDevice', 'outputSplitter', 'musicGain',
'noiseSource', 'noiseSplitter', 'noiseDevice',
'noiseGain', 'merger'
]
// Check if each required node exists and is not undefined in this.createdNodes
return requiredNodes.every(nodeKey => this.createdNodes[nodeKey] !== undefined)
},
connectNodes () {
// Destructure for easier access
const {
musicSource, musicSplitter, microphoneSource,
musicDevice, outputSplitter, musicGain,
noiseSource, noiseSplitter, noiseDevice,
noiseGain, merger
} = this.createdNodes
// Assuming all nodes are created and references to them are correct
// Connect music source to splitter (Stereo to 2 Channels)
musicSource.connect(musicSplitter) // 2 channels: L, R
// Connect microphone to musicDevice input 0 (Mono)
microphoneSource.connect(musicDevice, 0, 0) // 1 channel: Microphone
// Connect musicSplitter to musicDevice inputs 1 and 2 (Stereo)
musicSplitter.connect(musicDevice, 0, 1) // 1 channel: Left
musicSplitter.connect(musicDevice, 1, 2) // 1 channel: Right
// Connect musicDevice to outputSplitter (Stereo out)
musicDevice.connect(outputSplitter) // Assuming musicDevice outputs stereo
// Optionally connect musicDevice to musicGain for additional gain control (Stereo)
musicDevice.connect(musicGain) // Assuming musicDevice outputs stereo, connected to both channels of musicGain
// Connect noise source to noiseSplitter (Stereo to 2 Channels)
noiseSource.connect(noiseSplitter) // 2 channels: L, R
// Connect microphone to noiseDevice input 0 (Mono)
microphoneSource.connect(noiseDevice, 0, 0) // 1 channel: Microphone
// Connect noiseSplitter to noiseDevice inputs 1 and 2 (Stereo)
noiseSplitter.connect(noiseDevice, 0, 1) // 1 channel: Left
noiseSplitter.connect(noiseDevice, 1, 2) // 1 channel: Right
// Connect outputSplitter to noiseDevice inputs 3 and 4 (Stereo from musicDevice)
outputSplitter.connect(noiseDevice, 0, 3) // 1 channel: Left from musicDevice
outputSplitter.connect(noiseDevice, 1, 4) // 1 channel: Right from musicDevice
// Assuming noiseDevice outputs stereo, connect to noiseGain (Stereo)
noiseDevice.connect(noiseGain) // Assuming noiseDevice outputs stereo, connected to both channels of noiseGain
// Assuming you want to merge and output both processed signals (Stereo from both musicGain and noiseGain)
// Connect noiseGain to the first two inputs of the merger (Stereo)
noiseGain.connect(merger, 0, 0) // 1 channel: Left
noiseGain.connect(merger, 0, 1) // 1 channel: Right
// Connect musicGain to the last two inputs of the merger (Stereo)
musicGain.connect(merger, 0, 2) // 1 channel: Left
musicGain.connect(merger, 0, 3) // 1 channel: Right
// Finally, connect the merger to the audio context's destination (Stereo to output)
merger.connect(merger.context.destination)
},
handlePlayingUpdate (isPlaying) {
const ContextClass = (window.AudioContext || window.webkitAudioContext || window.mozAudioContext || window.oAudioContext || window.msAudioContext)
const audioContext = this.audioContext ? this.audioContext : new ContextClass()
let microphoneSource = null
// useNuxtApp().$logger.log('Handling playing update = ' + isPlaying) // true when playing
if (isPlaying) {
// Web Audio API is available.
// const musicAudioComponent = this.$refs[this.currentElement.title].audioElement
if (this.areAllNodesAvailable()) {
this.disconnectNodes()
const musicAudioElement = this.$refs[this.currentElement.title].$refs.audioElement
          const musicGainValue = this.createdNodes.musicGain.gain.value
          const noiseGainValue = this.createdNodes.noiseGain.gain.value
// Prepare the audio elements (unmute)
this.$refs[this.currentElement.title].$refs.audioElement.muted = false
this.$refs.Noise.$refs.audioElement.muted = false
// replace music node because all Nodes are there
this.createdNodes.musicSource = null
this.createdNodes.musicSource = audioContext.createMediaElementSource(musicAudioElement)
this.connectNodes()
if (musicGainValue > 0 && musicGainValue <= 1.0) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(musicGainValue || 1.0, audioContext.currentTime + 2)
}
if (noiseGainValue > 0 && noiseGainValue <= 1.0) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(noiseGainValue || 1.0, audioContext.currentTime + 3)
}
} else {
navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: false,
noiseSuppression: false,
autoGainControl: false
},
video: false
})
.then((micStream) => {
microphoneSource = this.audioContext.createMediaStreamSource(micStream)
this.createdNodes.microphoneSource ||= microphoneSource
// Return both audioContext and microphoneSource to the next link in the chain
return { audioContext, microphoneSource }
})
.then(({ audioContext, microphoneSource }) => {
// First RNBO device creation
return createRNBODevice(audioContext, this.noisePatch).then((noiseRNBODevice) => {
this.createdNodes.noiseRNBODevice ||= noiseRNBODevice
              // Return all necessary objects for the next step
return { audioContext, microphoneSource, noiseRNBODevice }
})
})
.then(({ audioContext, microphoneSource, noiseRNBODevice }) => {
// Second RNBO device creation
return createRNBODevice(audioContext, this.musicPatch).then((musicRNBODevice) => {
// Return all necessary objects for the final setup
this.createdNodes.musicRNBODevice ||= musicRNBODevice
return { audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }
})
}).then(({ audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }) => {
const musicElement = this.$refs[this.currentElement.title]
const musicAudioElement = musicElement.$refs.audioElement
// Ensure this.createdNodes is initialized
this.createdNodes ||= {}
// Assign nodes if they don't exist
this.createdNodes.microphoneSource ||= microphoneSource
this.createdNodes.musicDevice ||= musicRNBODevice.node
this.createdNodes.noiseDevice ||= noiseRNBODevice.node
this.createdNodes.noiseSource ||= audioContext.createMediaElementSource(this.$refs.Noise.$refs.audioElement)
this.createdNodes.musicSource ||= audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.musicSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.noiseSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.outputSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.merger ||= audioContext.createChannelMerger(4)
musicAudioElement.muted = false
this.$refs.Noise.$refs.audioElement.muted = false
this.createdNodes.musicGain.gain.value = 0.0001
            this.createdNodes.noiseGain.gain.value = 0.0001 // does nothing
this.connectNodes()
setTimeout(() => {
this.playState = true
this.createdNodes.musicGain.gain.exponentialRampToValueAtTime(1, audioContext.currentTime + 3)
this.createdNodes.noiseGain.gain.exponentialRampToValueAtTime(1, audioContext.currentTime + 4)
}, 150)
})
.catch((_error) => {
this.disconnectNodes()
if (this.audioContext) {
this.audioContext.destination.disconnect()
// audioContext.destination = null
}
this.$toast.error('Oouh sorry! Error while setting up audio, please reload.')
})
}
} else {
this.disconnectNodes()
const newAudioContext = new AudioContext()
this.audioContext.close()
this.audioContext = newAudioContext
}
},
disconnectNodes () {
// Ensure this.createdNodes is defined and is an object
if (typeof this.createdNodes === 'object' && this.createdNodes !== null) {
Object.values(this.createdNodes).forEach((node) => {
// Check if the node exists and has a disconnect method
if (node && typeof node.disconnect === 'function') {
node.disconnect()
}
// Additional handling for specific types of nodes, if necessary
// For example, stopping an OscillatorNode
if (node instanceof OscillatorNode) {
node.stop()
}
})
this.playState = false
}
}
}
}
</script>
<style scoped>
.rnboplayer {
position: fixed;
  width: 220px;
  max-width: 220px;
height: 100px;
display: inline-grid;
z-index: 2;
bottom: 11%;
left: 0;
right: 0;
margin-left: auto;
margin-right: auto;
}
</style>
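The guard behind `areAllNodesAvailable` above is a simple required-key check over the node map. A minimal standalone sketch (the names `allNodesAvailable`, `created`, `partial`, and the shortened `required` list are illustrative, not part of the component):

```javascript
// Sketch of the required-key check behind areAllNodesAvailable.
// `created` stands in for this.createdNodes; key names mirror the component.
function allNodesAvailable (created, requiredNodes) {
  return requiredNodes.every(key => created[key] !== undefined)
}

const required = ['musicSource', 'musicGain', 'noiseSource', 'noiseGain', 'merger']
const partial = { musicSource: {}, musicGain: {}, noiseSource: {} }
console.log(allNodesAvailable(partial, required)) // false: noiseGain and merger are missing
```

When the check passes, the component only swaps the `musicSource` node and reconnects; otherwise it runs the full getUserMedia and RNBO setup chain.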
@ -1,399 +0,0 @@
<template>
<div>
<KeyboardPlayHandler />
<h2>Press Space to start and wait a few seconds; then the sound should start. You will hear the artifacts right after the music begins.</h2>
<h2>This file contains both patches; use the gain sliders of the AudioElement to control volume and to skip tracks. The AudioStore is used for the shared AudioContext.</h2>
<div class="rnboplayer">
<div v-if="selectedAudioTitle">
{{ selectedAudio.title }}
<AudioElement
:ref="selectedAudio.title"
:key="selectedAudio.id"
:src="selectedAudio.src"
:title="selectedAudio.title"
@update:playing="handlePlayingUpdate"
@update:volume="changeMusicVolume"
>
<template #default="{}" />
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</div>
<AudioElement
:ref="noise.title"
:key="noise.id"
:src="noise.src"
:title="noise.title"
@update:volume="changeNoiseVolume"
>
<template #default="{}" />
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
<div class="row">
<div class="col-auto">
<button @click="skip">
Skip
</button>
</div>
<div class="col-auto">
<button @click="resumeContext">
Resume Context
</button>
</div>
<div class="col-auto">
<button @click="stopContext">
Suspend Context
</button>
</div>
</div>
</div>
</div>
</template>
<script>
import AudioElement from '@/components/experiments/AudioElement.vue'
import importedMusicPatcher from '@/assets/patch/music_patch.export.json'
import importedNoisePatcher from '@/assets/patch/noise_patch.export.json'
import { createRNBODevice } from '@/lib/AudioFunctions'
import KeyboardPlayHandler from '~/archive/components/KeyboardPlayHandler.vue'
// import setupNodes from '@/components/Player/Nodes'
import { useAudioStore } from '~/stores/audio'
export default {
components: {
AudioElement,
KeyboardPlayHandler
},
data () {
return {
audioList: [
{ id: 1, title: 'Lagoon', src: window.location.origin + '/sounds/lagoon.m4a' },
{ id: 2, title: 'Forest', src: window.location.origin + '/sounds/forest.m4a' },
{ id: 3, title: 'Meadow', src: window.location.origin + '/sounds/meadow.m4a' },
{ id: 4, title: 'Tropics', src: window.location.origin + '/sounds/tropics.m4a' }
],
noise: { id: 5, title: 'Noise', src: window.location.origin + useRuntimeConfig().public.tracks.masking_src_mp3 },
selectedAudioTitle: '',
currentElement: {},
createdNodes: {},
musicPatch: importedMusicPatcher,
noisePatch: importedNoisePatcher,
audioContext: null,
playState: false
}
},
computed: {
selectedAudio () {
return this.audioList.find(audio => audio.title === this.selectedAudioTitle) || null
},
checkContextCompatibility () {
// Assume createdNodes is an object where each property is a node
const nodes = Object.values(this.createdNodes)
const incompatible = nodes.filter(node => node.context !== this.audioContext)
      if (incompatible.length === 0) { return true }
      // useNuxtApp().$logger.log(incompatible.length + '/' + nodes.length + ' are incompatible')
this.handleIncompatibeError(incompatible)
return false
},
isPlaying () {
return this.playState
}
},
beforeUnmount () {
    const audioEl = this.currentElement && this.$refs[this.currentElement.title] && this.$refs[this.currentElement.title].$refs.audioElement
    if (audioEl) {
      audioEl.pause()
      audioEl.src = ''
      audioEl.load()
    }
    this.currentElement = null
this.disconnectNodes()
if (this.audioContext) { this.audioContext.close() }
this.audioContext = null
},
mounted () {
this.audioContext = useAudioStore().getContext()
    // Select the first track by default
this.selectAudioByTitle(this.audioList[0].title)
this.addUserNavigationHandling()
this.musicPatch = importedMusicPatcher
this.noisePatch = importedNoisePatcher
},
methods: {
changeNoiseVolume (volume) {
if (this.createdNodes.noiseGain) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
this.noiseGain = this.createdNodes.noiseGain.gain.value
}
},
changeMusicVolume (volume) {
if (this.createdNodes.musicGain) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
this.musicGain = this.createdNodes.musicGain.gain.value
}
},
addUserNavigationHandling () {
if ('mediaSession' in navigator) {
// Set the handler for the next track action
navigator.mediaSession.setActionHandler('nexttrack', () => {
this.skip('next')
})
// Set the handler for the previous track action
navigator.mediaSession.setActionHandler('previoustrack', () => {
this.skip('previous')
})
}
},
getSong (currentTitle, direction) {
const index = this.audioList.findIndex(song => song.title === currentTitle)
let adjacentIndex = index + (direction === 'next' ? 1 : -1)
// Loop back to the first song if 'next' goes beyond the last index
if (adjacentIndex >= this.audioList.length) {
adjacentIndex = 0
} else if (adjacentIndex < 0) {
adjacentIndex = this.audioList.length - 1
}
return this.audioList[adjacentIndex]
},
skip (direction) {
const nextSong = this.getSong(this.selectedAudioTitle, direction)
this.selectAudioByTitle(nextSong.title)
},
resumeContext () {
if (this.audioContext.state === 'suspended') { this.audioContext.resume() } else {
// useNuxtApp().$logger.log('already resumed?')
}
},
stopContext () {
if (this.audioContext.state === 'running') { this.audioContext.suspend() } else {
// useNuxtApp().$logger.log('already suspended')
}
},
updateCurrentElement (title) {
this.currentElement = this.audioList.find(e => e.title === title)
},
selectAudioByTitle (title) {
this.selectedAudioTitle = title
// const audioElement = this.audioList.find(e => e.title === this.selectedAudioTitle)
this.currentElement = this.audioList[this.audioList.findIndex(e => e.title === title)]
// // useNuxtApp().$logger.log('currentElement: ', this.selectedAudioTitle)
},
areAllNodesAvailable () {
// List of required node keys as you've named them in this.createdNodes
const requiredNodes = [
'musicSource', 'musicSplitter', 'microphoneSource',
'musicDevice', 'outputSplitter', 'musicGain',
'noiseSource', 'noiseSplitter', 'noiseDevice',
'noiseGain', 'merger'
]
// Check if each required node exists and is not undefined in this.createdNodes
return requiredNodes.every(nodeKey => this.createdNodes[nodeKey] !== undefined)
},
    handleIncompatibeError (incompatible) {
      if (incompatible && incompatible.length) {
        const node = incompatible.pop()
        if (node.context !== this.createdNodes.musicSplitter.context) {
          const audioContext = this.createdNodes.musicSplitter.context
          const musicAudioElement = this.$refs[this.currentElement.title].$refs.audioElement
          this.createdNodes.musicSource ||= audioContext.createMediaElementSource(musicAudioElement)
        }
      } else {
        // useNuxtApp().$logger.log('no error to solve')
      }
    },
connectNodes () {
try {
// Destructure for easier access
const {
musicSource, musicSplitter, microphoneSource,
musicDevice, outputSplitter, musicGain,
noiseSource, noiseSplitter, noiseDevice,
noiseGain, merger
} = this.createdNodes
// Check context compatibility
        if (!this.checkContextCompatibility) { throw new Error('Incompatible audio context among nodes.') }
// Connect music source to splitter (Stereo to 2 Channels)
musicSource.connect(musicSplitter)
// Connect microphone to musicDevice input 0 (Mono)
microphoneSource.connect(musicDevice, 0, 0)
// Connect musicSplitter to musicDevice inputs 1 and 2 (Stereo)
musicSplitter.connect(musicDevice, 0, 1)
musicSplitter.connect(musicDevice, 1, 2)
// Connect musicDevice to outputSplitter (Stereo out)
musicDevice.connect(outputSplitter)
// Optionally connect musicDevice to musicGain for additional gain control (Stereo)
musicDevice.connect(musicGain)
// Connect noise source to noiseSplitter (Stereo to 2 Channels)
noiseSource.connect(noiseSplitter)
// Connect microphone to noiseDevice input 0 (Mono)
microphoneSource.connect(noiseDevice, 0, 0)
// Connect noiseSplitter to noiseDevice inputs 1 and 2 (Stereo)
noiseSplitter.connect(noiseDevice, 0, 1)
noiseSplitter.connect(noiseDevice, 1, 2)
// Connect outputSplitter to noiseDevice inputs 3 and 4 (Stereo from musicDevice)
outputSplitter.connect(noiseDevice, 0, 3)
outputSplitter.connect(noiseDevice, 1, 4)
// Assuming noiseDevice outputs stereo, connect to noiseGain (Stereo)
noiseDevice.connect(noiseGain)
// Assuming you want to merge and output both processed signals (Stereo from both musicGain and noiseGain)
// Connect noiseGain to the first two inputs of the merger (Stereo)
noiseGain.connect(merger, 0, 0)
noiseGain.connect(merger, 0, 1)
// Connect musicGain to the last two inputs of the merger (Stereo)
musicGain.connect(merger, 0, 2)
musicGain.connect(merger, 0, 3)
// Finally, connect the merger to the audio context's destination (Stereo to output)
merger.connect(merger.context.destination)
} catch (e) {
// useNuxtApp().$logger.error('Failed to connect nodes: ', e.message)
// Additional error handling can be implemented here if necessary
}
},
handlePlayingUpdate (isPlaying) {
const audioContext = this.audioContext ? this.audioContext : useAudioStore().getContext()
let microphoneSource = null
// // useNuxtApp().$logger.log('Handling playing update = ' + isPlaying) // true when playing
if (isPlaying) {
// Web Audio API is available.
// const musicAudioComponent = this.$refs[this.currentElement.title].audioElement
if (this.areAllNodesAvailable()) {
this.disconnectNodes()
const musicAudioElement = this.$refs[this.currentElement.title].$refs.audioElement
const musicGainValue = this.createdNodes.musicGain.gain.value
const noiseGainValue = this.createdNodes.noiseGain.gain.value
// Prepare the audio elements (unmute)
this.$refs[this.currentElement.title].$refs.audioElement.muted = false
this.$refs.Noise.$refs.audioElement.muted = false
// replace music node because all Nodes are there
this.createdNodes.musicSource = null
this.createdNodes.musicSource = audioContext.createMediaElementSource(musicAudioElement)
this.connectNodes()
if (musicGainValue > 0 && musicGainValue <= 1.0) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(musicGainValue || 1.0, audioContext.currentTime + 2)
}
if (noiseGainValue > 0 && noiseGainValue <= 1.0) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(noiseGainValue || 1.0, audioContext.currentTime + 3)
}
this.audioContext.resume()
} else {
navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: false,
noiseSuppression: false,
autoGainControl: false
},
video: false
})
.then((micStream) => {
microphoneSource = this.audioContext.createMediaStreamSource(micStream)
this.createdNodes.microphoneSource ||= microphoneSource
// Return both audioContext and microphoneSource to the next link in the chain
return { audioContext, microphoneSource }
})
.then(({ audioContext, microphoneSource }) => {
// First RNBO device creation
return createRNBODevice(audioContext, this.noisePatch).then((noiseRNBODevice) => {
this.createdNodes.noiseRNBODevice ||= noiseRNBODevice
// Return all necessary objects for the next step
return { audioContext, microphoneSource, noiseRNBODevice }
})
})
.then(({ audioContext, microphoneSource, noiseRNBODevice }) => {
// Second RNBO device creation
return createRNBODevice(audioContext, this.musicPatch).then((musicRNBODevice) => {
// Return all necessary objects for the final setup
this.createdNodes.musicRNBODevice ||= musicRNBODevice
return { audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }
})
}).then(({ audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }) => {
const musicElement = this.$refs[this.currentElement.title]
const musicAudioElement = musicElement.$refs.audioElement
// Ensure this.createdNodes is initialized
this.createdNodes ||= {}
// Assign nodes if they don't exist
this.createdNodes.microphoneSource ||= microphoneSource
this.createdNodes.musicDevice ||= musicRNBODevice.node
this.createdNodes.noiseDevice ||= noiseRNBODevice.node
this.createdNodes.noiseSource ||= audioContext.createMediaElementSource(this.$refs.Noise.$refs.audioElement)
this.createdNodes.musicSource ||= audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.musicSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.noiseSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.outputSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.merger ||= audioContext.createChannelMerger(4)
musicAudioElement.muted = false
this.$refs.Noise.$refs.audioElement.muted = false
this.createdNodes.musicGain.gain.value = 0.0001
this.createdNodes.noiseGain.gain.value = 0.0001 // does nothing
this.connectNodes()
setTimeout(() => {
this.playState = true
this.createdNodes.musicGain.gain.exponentialRampToValueAtTime(1, audioContext.currentTime + 3)
this.createdNodes.noiseGain.gain.exponentialRampToValueAtTime(1, audioContext.currentTime + 4)
if (audioContext.state === 'suspended') { this.resumeContext() }
}, 150)
})
.catch((_error) => {
this.disconnectNodes()
if (this.audioContext) {
this.audioContext.destination.disconnect()
// audioContext.destination = null
}
this.$toast.error('Oh, sorry! Error while setting up audio, please reload.')
})
}
} else {
// this.audioContext.destination.disconnect()
this.audioContext.suspend()
// this.disconnectNodes()
// const newAudioContext = useAudioStore().getContext()
// this.audioContext.close()
// this.audioContext = newAudioContext
}
},
disconnectNodes () {
// Ensure this.createdNodes is defined and is an object
if (typeof this.createdNodes === 'object' && this.createdNodes !== null) {
Object.values(this.createdNodes).forEach((node) => {
// Check if the node exists and has a disconnect method
if (node && typeof node.disconnect === 'function') {
node.disconnect()
}
// Additional handling for specific types of nodes, if necessary
// For example, stopping an OscillatorNode
if (node instanceof OscillatorNode) {
node.stop()
}
})
this.playState = false
}
}
}
}
</script>
<style scoped>
.rnboplayer {
position: fixed;
width: 220px; /* Or specify a fixed width like 220px if you prefer */
max-width: 220px; /* This line might be redundant depending on your width strategy */
height: 100px;
display: inline-grid;
z-index: 2;
bottom: 11%;
left: 0;
right: 0;
margin-left: auto;
margin-right: auto;
}
</style>
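The component above fades gains in with `linearRampToValueAtTime` and `exponentialRampToValueAtTime`, and it deliberately starts gains at `0.0001` rather than `0`. The sketch below shows the interpolation curves those methods schedule, per the Web Audio spec; the function names are illustrative, not part of any API.

```javascript
// Sketch of the curves behind the Web Audio ramp methods:
// linearRampToValueAtTime:      v(t) = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
// exponentialRampToValueAtTime: v(t) = v0 * (v1 / v0) ** ((t - t0) / (t1 - t0))
// The exponential form is why the component starts gains at 0.0001 instead
// of 0: an exponential ramp from exactly 0 is undefined (division by zero).
function linearRampValue (v0, v1, t0, t1, t) {
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0))
}

function exponentialRampValue (v0, v1, t0, t1, t) {
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0))
}
```

Halfway through a linear 0→1 ramp the gain is 0.5, while the exponential ramp spends most of its duration at low values — which is why it sounds like a more natural fade.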


@ -1,395 +0,0 @@
<template>
<div>
<KeyboardPlayHandler />
<h2>You will hear the artifacts right after the music begins</h2>
<h2>
These files contain both patches. Use the gain sliders of the AudioElements to control volume and to skip; the AudioStore is used for a shared AudioContext.
The music is connected directly to the noise patch,
which causes far fewer glitches
</h2>
<div class="rnboplayer">
<div v-if="selectedAudioTitle">
{{ selectedAudio.title }}
<AudioElement
:ref="selectedAudio.title"
:key="selectedAudio.id"
:src="selectedAudio.src"
:title="selectedAudio.title"
@update:playing="handlePlayingUpdate"
@update:volume="changeMusicVolume"
>
<template #default="{}" />
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</div>
<AudioElement
:ref="noise.title"
:key="noise.id"
:src="noise.src"
:title="noise.title"
@update:volume="changeNoiseVolume"
>
<template #default="{}" />
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
<div class="row">
<div class="col-auto">
<button @click="skip">
Skip
</button>
</div>
<div class="col-auto">
<button @click="resumeContext">
Resume Context
</button>
</div>
<div class="col-auto">
<button @click="stopContext">
Suspend Context
</button>
</div>
</div>
</div>
</div>
</template>
<script>
import AudioElement from '@/components/experiments/AudioElement.vue'
import importedMusicPatcher from '@/assets/patch/music_patch.export.json'
import importedNoisePatcher from '@/assets/patch/noise_patch.export.json'
import { createRNBODevice } from '@/lib/AudioFunctions'
import KeyboardPlayHandler from '~/archive/components/KeyboardPlayHandler.vue'
// import setupNodes from '@/components/Player/Nodes'
import { useAudioStore } from '~/stores/audio'
export default {
components: {
AudioElement,
KeyboardPlayHandler
},
data () {
return {
audioList: [
{ id: 1, title: 'Lagoon', src: window.location.origin + '/sounds/lagoon.m4a' },
{ id: 2, title: 'Forest', src: window.location.origin + '/sounds/forest.m4a' },
{ id: 3, title: 'Meadow', src: window.location.origin + '/sounds/meadow.m4a' },
{ id: 4, title: 'Tropics', src: window.location.origin + '/sounds/tropics.m4a' }
],
noise: { id: 5, title: 'Noise', src: window.location.origin + '/masking/noise.flac' },
selectedAudioTitle: '',
currentElement: {},
createdNodes: {},
musicPatch: importedMusicPatcher,
noisePatch: importedNoisePatcher,
audioContext: null,
playState: false
}
},
computed: {
selectedAudio () {
return this.audioList.find(audio => audio.title === this.selectedAudioTitle) || null
},
checkContextCompatibility () {
// Assume createdNodes is an object where each property is a node
const nodes = Object.values(this.createdNodes)
const incompatible = nodes.filter(node => node.context !== this.audioContext)
if (incompatible.length === 0) { return true }
// // useNuxtApp().$logger.log(incompatible.length + '/' + nodes.length + ' are incompatible')
this.handleIncompatibeError(incompatible)
return false
},
isPlaying () {
return this.playState
}
},
beforeUnmount () {
this.disconnectNodes()
if (this.audioContext) { this.audioContext.close() }
this.audioContext = null
},
mounted () {
this.audioContext = useAudioStore().getContext()
// Example: Select 'Song Three' by default
this.selectAudioByTitle(this.audioList[0].title)
this.addUserNavigationHandling()
this.musicPatch = importedMusicPatcher
this.noisePatch = importedNoisePatcher
},
methods: {
changeNoiseVolume (volume) {
if (this.createdNodes.noiseGain) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(volume, this.createdNodes.noiseGain.context.currentTime + 0.30)
this.noiseGain = this.createdNodes.noiseGain.gain.value
}
},
changeMusicVolume (volume) {
if (this.createdNodes.musicGain) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(volume, this.createdNodes.musicGain.context.currentTime + 0.30)
this.musicGain = this.createdNodes.musicGain.gain.value
}
},
addUserNavigationHandling () {
if ('mediaSession' in navigator) {
// Set the handler for the next track action
navigator.mediaSession.setActionHandler('nexttrack', () => {
this.skip('next')
})
// Set the handler for the previous track action
navigator.mediaSession.setActionHandler('previoustrack', () => {
this.skip('previous')
})
}
},
getSong (currentTitle, direction) {
const index = this.audioList.findIndex(song => song.title === currentTitle)
let adjacentIndex = index + (direction === 'next' ? 1 : -1)
// Loop back to the first song if 'next' goes beyond the last index
if (adjacentIndex >= this.audioList.length) {
adjacentIndex = 0
} else if (adjacentIndex < 0) {
adjacentIndex = this.audioList.length - 1
}
return this.audioList[adjacentIndex]
},
skip (direction) {
const nextSong = this.getSong(this.selectedAudioTitle, direction)
this.selectAudioByTitle(nextSong.title)
},
resumeContext () {
if (this.audioContext.state === 'suspended') { this.audioContext.resume() } else {
// useNuxtApp().$logger.log('already resumed?')
}
},
stopContext () {
if (this.audioContext.state === 'running') { this.audioContext.suspend() } else {
// useNuxtApp().$logger.log('already suspended')
}
},
updateCurrentElement (title) {
this.currentElement = this.audioList.find(e => e.title === title)
},
selectAudioByTitle (title) {
this.selectedAudioTitle = title
// const audioElement = this.audioList.find(e => e.title === this.selectedAudioTitle)
this.currentElement = this.audioList[this.audioList.findIndex(e => e.title === title)]
// // useNuxtApp().$logger.log('currentElement: ', this.selectedAudioTitle)
},
areAllNodesAvailable () {
// List of required node keys as you've named them in this.createdNodes
const requiredNodes = [
'musicSource', 'musicSplitter', 'microphoneSource',
'musicDevice', 'outputSplitter', 'musicGain',
'noiseSource', 'noiseSplitter', 'noiseDevice',
'noiseGain', 'merger'
]
// Check if each required node exists and is not undefined in this.createdNodes
return requiredNodes.every(nodeKey => this.createdNodes[nodeKey] !== undefined)
},
handleIncompatibeError (incompatible) {
if (incompatible) {
const node = incompatible.pop()
if (node.context !== this.createdNodes.musicSplitter.context) {
const audioContext = this.createdNodes.musicSplitter.context
const musicAudioElement = this.$refs[this.currentElement.title].$refs.audioElement
this.createdNodes.musicSource ||= audioContext.createMediaElementSource(musicAudioElement)
}
} else {
// useNuxtApp().$logger.log('no error to solve')
}
},
connectNodes () {
try {
// Destructure for easier access
const {
musicSource, microphoneSource,
musicGain,
noiseSource, noiseSplitter, noiseDevice,
noiseGain, merger
} = this.createdNodes
// Check context compatibility
if (!this.checkContextCompatibility) { throw new Error('Incompatible audio context among nodes.') }
// Connect music source to splitter (Stereo to 2 Channels)
musicSource.connect(musicGain)
// Connect microphone to musicDevice input 0 (Mono)
// Connect musicDevice to outputSplitter (Stereo out)
// musicDevice.connect(outputSplitter)
// Optionally connect musicDevice to musicGain for additional gain control (Stereo)
// musicDevice.connect(musicGain)
// Connect noise source to noiseSplitter (Stereo to 2 Channels)
noiseSource.connect(noiseSplitter)
// Connect microphone to noiseDevice input 0 (Mono)
microphoneSource.connect(noiseDevice, 0, 0)
// Connect noiseSplitter to noiseDevice inputs 1 and 2 (Stereo)
noiseSplitter.connect(noiseDevice, 0, 1)
noiseSplitter.connect(noiseDevice, 1, 2)
// Connect outputSplitter to noiseDevice inputs 3 and 4 (Stereo from musicDevice)
// const rndSrc = noiseDevice.context.createConstantSource()
// const rndSrc2 = noiseDevice.context.createConstantSource()
// rndSrc.connect(noiseDevice, 0, 3)
// rndSrc2.connect(noiseDevice, 0, 4)
// Assuming noiseDevice outputs stereo, connect to noiseGain (Stereo)
noiseDevice.connect(noiseGain)
// Assuming you want to merge and output both processed signals (Stereo from both musicGain and noiseGain)
// Connect noiseGain to the first two inputs of the merger (Stereo)
noiseGain.connect(merger, 0, 0)
noiseGain.connect(merger, 0, 1)
// Connect musicGain to the last two inputs of the merger (Stereo)
musicGain.connect(merger, 0, 2)
musicGain.connect(merger, 0, 3)
// Finally, connect the merger to the audio context's destination (Stereo to output)
merger.connect(merger.context.destination)
} catch (e) {
// useNuxtApp().$logger.error('Failed to connect nodes: ', e.message)
// Additional error handling can be implemented here if necessary
}
},
handlePlayingUpdate (isPlaying) {
const audioContext = this.audioContext ? this.audioContext : useAudioStore().getContext()
let microphoneSource = null
// // useNuxtApp().$logger.log('Handling playing update = ' + isPlaying) // true when playing
if (isPlaying) {
// Web Audio API is available.
// const musicAudioComponent = this.$refs[this.currentElement.title].audioElement
if (this.areAllNodesAvailable()) {
this.disconnectNodes()
const musicAudioElement = this.$refs[this.currentElement.title].$refs.audioElement
const musicGainValue = this.createdNodes.musicGain.gain.value
const noiseGainValue = this.createdNodes.noiseGain.gain.value
// Prepare the audio elements (unmute)
this.$refs[this.currentElement.title].$refs.audioElement.muted = false
this.$refs.Noise.$refs.audioElement.muted = false
// replace music node because all Nodes are there
this.createdNodes.musicSource = null
this.createdNodes.musicSource = audioContext.createMediaElementSource(musicAudioElement)
this.connectNodes()
if (musicGainValue > 0 && musicGainValue <= 1.0) {
this.createdNodes.musicGain.gain.linearRampToValueAtTime(musicGainValue || 1.0, audioContext.currentTime + 2)
}
if (noiseGainValue > 0 && noiseGainValue <= 1.0) {
this.createdNodes.noiseGain.gain.linearRampToValueAtTime(noiseGainValue || 1.0, audioContext.currentTime + 3)
}
this.audioContext.resume()
} else {
navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: false,
noiseSuppression: false,
autoGainControl: false
},
video: false
})
.then((micStream) => {
microphoneSource = this.audioContext.createMediaStreamSource(micStream)
this.createdNodes.microphoneSource ||= microphoneSource
// Return both audioContext and microphoneSource to the next link in the chain
return { audioContext, microphoneSource }
})
.then(({ audioContext, microphoneSource }) => {
// First RNBO device creation
return createRNBODevice(audioContext, this.noisePatch).then((noiseRNBODevice) => {
this.createdNodes.noiseRNBODevice ||= noiseRNBODevice
// Return all necessary objects for the next step
return { audioContext, microphoneSource, noiseRNBODevice }
})
})
.then(({ audioContext, microphoneSource, noiseRNBODevice }) => {
// Second RNBO device creation
return createRNBODevice(audioContext, this.musicPatch).then((musicRNBODevice) => {
// Return all necessary objects for the final setup
this.createdNodes.musicRNBODevice ||= musicRNBODevice
return { audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }
})
}).then(({ audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }) => {
const musicElement = this.$refs[this.currentElement.title]
const musicAudioElement = musicElement.$refs.audioElement
// Ensure this.createdNodes is initialized
this.createdNodes ||= {}
// Assign nodes if they don't exist
this.createdNodes.microphoneSource ||= microphoneSource
this.createdNodes.musicDevice ||= musicRNBODevice.node
this.createdNodes.noiseDevice ||= noiseRNBODevice.node
this.createdNodes.noiseSource ||= audioContext.createMediaElementSource(this.$refs.Noise.$refs.audioElement)
this.createdNodes.musicSource ||= audioContext.createMediaElementSource(musicAudioElement)
this.createdNodes.musicSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.noiseSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.outputSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.merger ||= audioContext.createChannelMerger(4)
musicAudioElement.muted = false
this.$refs.Noise.$refs.audioElement.muted = false
this.createdNodes.musicGain.gain.value = 0.0001
this.createdNodes.noiseGain.gain.value = 0.0001 // does nothing
this.connectNodes()
setTimeout(() => {
this.playState = true
this.createdNodes.musicGain.gain.exponentialRampToValueAtTime(1, audioContext.currentTime + 3)
this.createdNodes.noiseGain.gain.exponentialRampToValueAtTime(1, audioContext.currentTime + 4)
if (audioContext.state === 'suspended') { this.resumeContext() }
}, 1000)
})
.catch((_error) => {
this.disconnectNodes()
if (this.audioContext) {
this.audioContext.destination.disconnect()
// audioContext.destination = null
}
this.$toast.error('Oh, sorry! Error while setting up audio, please reload.')
})
}
} else {
// this.audioContext.destination.disconnect()
this.audioContext.suspend()
// this.disconnectNodes()
// const newAudioContext = useAudioStore().getContext()
// this.audioContext.close()
// this.audioContext = newAudioContext
}
},
disconnectNodes () {
// Ensure this.createdNodes is defined and is an object
if (typeof this.createdNodes === 'object' && this.createdNodes !== null) {
Object.values(this.createdNodes).forEach((node) => {
// Check if the node exists and has a disconnect method
if (node && typeof node.disconnect === 'function') {
node.disconnect()
}
// Additional handling for specific types of nodes, if necessary
// For example, stopping an OscillatorNode
if (node instanceof OscillatorNode) {
node.stop()
}
})
this.playState = false
}
}
}
}
</script>
<style scoped>
.rnboplayer {
position: fixed;
width: 220px; /* Or specify a fixed width like 220px if you prefer */
max-width: 220px; /* This line might be redundant depending on your width strategy */
height: 100px;
display: inline-grid;
z-index: 2;
bottom: 11%;
left: 0;
right: 0;
margin-left: auto;
margin-right: auto;
}
</style>
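Both components above implement the same wrap-around skip logic in `getSong()`: moving "next" past the last playlist entry returns to the first, and "previous" before the first returns to the last. That logic can be sketched as a pure function (the name `adjacentIndex` is illustrative, not from the component):

```javascript
// Wrap-around index step, as used by getSong()/skip():
// 'next' past the end wraps to 0; 'previous' before 0 wraps to the end.
function adjacentIndex (index, direction, length) {
  const next = index + (direction === 'next' ? 1 : -1)
  if (next >= length) { return 0 }
  if (next < 0) { return length - 1 }
  return next
}
```

An equivalent one-liner is `((index + delta) % length + length) % length`; the explicit branches above match the component's style and are easier to read.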


@ -1,455 +0,0 @@
<template>
<div class="rnboplayer">
<label class="switch">
<input type="checkbox" :checked="deviceUsed">
<span class="slider" />
</label>
<div class="container-fluid">
<div class="row justify-content-center">
<div class="adaptive pb-3">
<div class="col-12 ">
<div class="d-none d-md-block mx-auto pb-1" style="width: 225px">
<div class="progress " style="height: 10px">
<div
class="progress-bar bar"
role="progressbar"
aria-label="Basic example"
:style="{width:bar_width+'%'}"
style="background-color: #e9c046;transition: 0.1s"
aria-valuenow="75"
aria-valuemin="0"
aria-valuemax="100"
/>
</div>
</div>
<div class="d-flex justify-content-center mb-1">
<nuxt-link to="#adaptive-modal" data-bs-target="#adaptive-modal" data-bs-toggle="modal" class="text-muted text-decoration-none fw-bold fs-6 ">
{{ t('Adaptive soundscape') }} : <span class="" style="color: #e9c046">{{ t('On') }}</span>
</nuxt-link><span class="ps-3"><i style="padding: 5px 0px;" class="fa-solid text-muted d-flex fa-chevron-right" /></span>
</div>
<div class="d-block d-md-none mx-auto pb-1" style="width: 225px">
<div class="progress " style="height: 10px">
<div
class="progress-bar bar"
role="progressbar"
aria-label="Basic example"
:style="{width:bar_width+'%'}"
style="background-color: #e9c046;transition: 0.1s"
aria-valuenow="75"
aria-valuemin="0"
aria-valuemax="100"
/>
</div>
</div>
</div>
<BootomBar />
</div>
</div>
</div>
<div v-if="selectedAudioTitle">
{{ selectedAudio.title }}
<AudioElement
:ref="selectedAudio.title"
:key="selectedAudio.id"
:src="selectedAudio.src"
:title="selectedAudio.title"
@update:playing="handlePlayingUpdate"
@volume="changeMusicVolume"
>
<template #default="{ }" />
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
</div>
<AudioElement
:ref="noise.title"
:key="noise.id"
:src="noise.src"
:title="noise.title"
@update:playing="handlePlayingUpdate"
@volume="changeNoiseVolume"
>
<template #default="{}" />
<!-- Slot content for AudioElement, if needed -->
</AudioElement>
<button @click="skip">
Skip
</button>
</div>
</template>
<script>
import AudioElement from '@/components/experiments/AudioElement.vue'
import importedMusicPatcher from '@/assets/patch/music_patch.export.json'
import importedNoisePatcher from '@/assets/patch/noise_patch.export.json'
import { createRNBODevice } from '@/lib/AudioFunctions'
// import setupNodes from '@/components/Player/Nodes'
// import { useAudioStore } from '~/stores/audio'
export default {
components: {
AudioElement
},
data () {
return {
t: useI18n,
audioList: [
{ id: 1, title: 'Lagoon', src: window.location.origin + '/sounds/lagoon.m4a' },
{ id: 2, title: 'Forest', src: window.location.origin + '/sounds/forest.m4a' },
{ id: 3, title: 'Meadow', src: window.location.origin + '/sounds/meadow.m4a' },
{ id: 4, title: 'Tropics', src: window.location.origin + '/sounds/tropics.m4a' }
],
noise: { id: 5, title: 'Noise', src: window.location.origin + '/masking/noise.flac' },
selectedAudioTitle: '',
currentElement: {},
createdNodes: {},
musicPatch: importedMusicPatcher,
noisePatch: importedNoisePatcher,
audioContext: null,
isPlayerRunning: false,
checked: false,
// meter
meter: null,
bar_width: 0,
analyser: null,
dataArray: null,
bar_val: 25
}
},
computed: {
selectedAudio () {
return this.audioList.find(audio => audio.title === this.selectedAudioTitle) || null
},
deviceUsed () {
return this.isPlayerRunning
}
},
beforeUnmount () {
this.disconnectNodes()
if (this.audioContext) { this.audioContext.close() }
this.audioContext = null
},
mounted () {
this.audioContext = new AudioContext()
this.currentElement = this.audioList[0]
// Example: Select 'Song Three' by default
this.selectAudioByTitle(this.audioList[0].title)
this.addUserNavigationHandling()
this.musicPatch = importedMusicPatcher
this.noisePatch = importedNoisePatcher
},
methods: {
changeMusicVolume (newValue) {
if (!this.createdNodes.musicGain) { return }
// useNuxtApp().$logger.log(this.createdNodes.musicGain.gain)
this.createdNodes.musicGain.gain.cancelScheduledValues(this.createdNodes.musicGain.context.currentTime)
this.createdNodes.musicGain.gain.setValueAtTime(newValue / 100, this.createdNodes.musicGain.context.currentTime + 0.01)
},
changeNoiseVolume (newValue) {
if (!this.createdNodes.noiseGain) { return }
this.createdNodes.noiseGain.gain.setValueAtTime(newValue / 100, this.createdNodes.noiseGain.context.currentTime + 0.01)
// this.createdNodes.noiseGain.gain.value = newValue / 100
},
addUserNavigationHandling () {
if ('mediaSession' in navigator) {
// Set the handler for the next track action
navigator.mediaSession.setActionHandler('nexttrack', () => {
this.skip('next')
})
// Set the handler for the previous track action
navigator.mediaSession.setActionHandler('previoustrack', () => {
this.skip('previous')
})
}
},
getSong (currentTitle, direction) {
const index = this.audioList.findIndex(song => song.title === currentTitle)
let adjacentIndex = index + (direction === 'next' ? 1 : -1)
// Loop back to the first song if 'next' goes beyond the last index
if (adjacentIndex >= this.audioList.length) {
adjacentIndex = 0
} else if (adjacentIndex < 0) {
adjacentIndex = this.audioList.length - 1
}
return this.audioList[adjacentIndex]
},
skip (direction) {
const nextSong = this.getSong(this.selectedAudioTitle, direction)
this.selectAudioByTitle(nextSong.title)
},
updateCurrentElement (title) {
this.currentElement = this.audioList.find(e => e.title === title)
},
selectAudioByTitle (title) {
this.selectedAudioTitle = title
// const audioElement = this.audioList.find(e => e.title === this.selectedAudioTitle)
this.currentElement = this.audioList[this.audioList.findIndex(e => e.title === title)]
// useNuxtApp().$logger.log('currentElement: ', this.selectedAudioTitle)
},
areAllNodesAvailable () {
// List of required node keys as you've named them in this.createdNodes
const requiredNodes = [
'musicSource', 'musicSplitter', 'microphoneSource',
'musicDevice', 'outputSplitter', 'musicGain',
'noiseSource', 'noiseSplitter', 'noiseDevice',
'noiseGain', 'merger'
]
// Check if each required node exists and is not undefined in this.createdNodes
return requiredNodes.every(nodeKey => this.createdNodes[nodeKey] !== undefined)
},
updateMeter () {
requestAnimationFrame(this.updateMeter)
this.analyser.getByteFrequencyData(this.dataArray)
const rms = this.getRMS(this.dataArray)
let level = 20 * Math.log10(rms / 128)
level = Math.max(0, Math.min(100, level + 100))
// bar.style.width = level + '%';
this.bar_width = level
},
getRMS (dataArray) {
let rms = 0
for (let i = 0; i < dataArray.length; i++) {
rms += dataArray[i] * dataArray[i]
}
rms /= dataArray.length
rms = Math.sqrt(rms)
return rms
},
connectNodes () {
// Destructure for easier access
const audioCtx = this.createdNodes.microphoneSource.context
this.analyser = audioCtx.createAnalyser()
this.createdNodes.microphoneSource.connect(this.analyser)
this.analyser.fftSize = 2048
const bufferLength = this.analyser.frequencyBinCount
this.dataArray = new Uint8Array(bufferLength)
this.updateMeter()
const {
musicSource, musicSplitter, microphoneSource,
musicDevice, outputSplitter, musicGain,
noiseSource, noiseSplitter, noiseDevice,
noiseGain, merger
} = this.createdNodes
// Assuming all nodes are created and references to them are correct
// Connect music source to splitter (Stereo to 2 Channels)
musicSource.connect(musicSplitter) // 2 channels: L, R
// Connect microphone to musicDevice input 0 (Mono)
microphoneSource.connect(musicDevice, 0, 0) // 1 channel: Microphone
// Connect musicSplitter to musicDevice inputs 1 and 2 (Stereo)
musicSplitter.connect(musicDevice, 0, 1) // 1 channel: Left
musicSplitter.connect(musicDevice, 1, 2) // 1 channel: Right
// Connect musicDevice to outputSplitter (Stereo out)
musicDevice.connect(outputSplitter) // Assuming musicDevice outputs stereo
// Optionally connect musicDevice to musicGain for additional gain control (Stereo)
musicDevice.connect(musicGain) // Assuming musicDevice outputs stereo, connected to both channels of musicGain
// Connect noise source to noiseSplitter (Stereo to 2 Channels)
noiseSource.connect(noiseSplitter) // 2 channels: L, R
// Connect microphone to noiseDevice input 0 (Mono)
microphoneSource.connect(noiseDevice, 0, 0) // 1 channel: Microphone
// Connect noiseSplitter to noiseDevice inputs 1 and 2 (Stereo)
noiseSplitter.connect(noiseDevice, 0, 1) // 1 channel: Left
noiseSplitter.connect(noiseDevice, 1, 2) // 1 channel: Right
// Connect outputSplitter to noiseDevice inputs 3 and 4 (Stereo from musicDevice)
outputSplitter.connect(noiseDevice, 0, 3) // 1 channel: Left from musicDevice
outputSplitter.connect(noiseDevice, 1, 4) // 1 channel: Right from musicDevice
// Assuming noiseDevice outputs stereo, connect to noiseGain (Stereo)
noiseDevice.connect(noiseGain) // Assuming noiseDevice outputs stereo, connected to both channels of noiseGain
// Assuming you want to merge and output both processed signals (Stereo from both musicGain and noiseGain)
// Connect noiseGain to the first two inputs of the merger (Stereo)
noiseGain.connect(merger, 0, 0) // 1 channel: Left
noiseGain.connect(merger, 0, 1) // 1 channel: Right
// Connect musicGain to the last two inputs of the merger (Stereo)
musicGain.connect(merger, 0, 2) // 1 channel: Left
musicGain.connect(merger, 0, 3) // 1 channel: Right
// Finally, connect the merger to the audio context's destination (Stereo to output)
merger.connect(merger.context.destination)
},
handlePlayingUpdate (isPlaying) {
const ContextClass = (window.AudioContext || window.webkitAudioContext || window.mozAudioContext || window.oAudioContext || window.msAudioContext)
let audioContext = this.audioContext ? this.audioContext : new ContextClass()
let microphoneSource = null
// useNuxtApp().$logger.log('Handling playing update = ' + isPlaying) // true when playing
if (isPlaying) {
// Web Audio API is available.
// const musicAudioComponent = this.$refs[this.currentElement.title].audioElement
if (this.areAllNodesAvailable()) {
this.disconnectNodes()
// reconnecting because all Nodes are there
this.connectNodes()
this.createdNodes.musicGain.gain.exponentialRampToValueAtTime(1.0, audioContext.currentTime + 2)
this.createdNodes.noiseGain.gain.exponentialRampToValueAtTime(1.0, audioContext.currentTime + 3)
// useNuxtApp().$logger.log('Connected everything because it was there already')
} else {
navigator.mediaDevices.getUserMedia({
audio: {
echoCancellation: false,
noiseSuppression: false,
autoGainControl: false
},
video: false
})
.then((micStream) => {
audioContext = new ContextClass()
microphoneSource = audioContext.createMediaStreamSource(micStream)
this.createdNodes.microphoneSource ||= microphoneSource
// Return both audioContext and microphoneSource to the next link in the chain
// useNuxtApp().$logger.log('Step 1 Micstream created')
return { audioContext, microphoneSource }
})
.then(({ audioContext, microphoneSource }) => {
// First RNBO device creation
return createRNBODevice(audioContext, this.noisePatch).then((noiseRNBODevice) => {
this.createdNodes.noiseRNBODevice ||= noiseRNBODevice
// Return all necessary objects for the next step
// useNuxtApp().$logger.log('Step 2 AudioContext, NoiseDevice und MicSource created')
return { audioContext, microphoneSource, noiseRNBODevice }
})
})
.then(({ audioContext, microphoneSource, noiseRNBODevice }) => {
// Second RNBO device creation
return createRNBODevice(audioContext, this.musicPatch).then((musicRNBODevice) => {
// Return all necessary objects for the final setup
this.createdNodes.musicRNBODevice ||= musicRNBODevice
// useNuxtApp().$logger.log('Step 3 AudioContext, NoiseDevice, musicDevice und MicSource created')
return { audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }
})
}).then(({ audioContext, microphoneSource, noiseRNBODevice, musicRNBODevice }) => {
// Ensure this.createdNodes is initialized
this.createdNodes ||= {}
// Assign nodes if they don't exist
this.createdNodes.microphoneSource ||= microphoneSource
this.createdNodes.musicDevice ||= musicRNBODevice.node
this.createdNodes.noiseDevice ||= noiseRNBODevice.node
this.createdNodes.noiseSource ||= audioContext.createMediaElementSource(this.$refs.Noise.$refs.audioElement)
this.createdNodes.musicSource ||= audioContext.createMediaElementSource(this.$refs[this.currentElement.title].$refs.audioElement)
this.createdNodes.musicSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.noiseSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.outputSplitter ||= audioContext.createChannelSplitter(2)
this.createdNodes.musicGain ||= audioContext.createGain()
this.createdNodes.noiseGain ||= audioContext.createGain()
this.createdNodes.merger ||= audioContext.createChannelMerger(4)
this.createdNodes.musicGain.gain.value = 0.001
this.createdNodes.noiseGain.gain.value = 0.001 // does nothing
this.connectNodes()
this.createdNodes.musicGain.gain.exponentialRampToValueAtTime(0.5, audioContext.currentTime + 3)
this.createdNodes.noiseGain.gain.exponentialRampToValueAtTime(0.5, audioContext.currentTime + 4)
})
.catch((_error) => {
// this.disconnectNodes()
if (this.audioContext) {
this.audioContext.destination.disconnect()
// audioContext.destination = null
}
})
}
} else {
this.disconnectNodes()
}
},
disconnectNodes () {
// Ensure this.createdNodes is defined and is an object
if (this.createdNodes !== null) {
Object.values(this.createdNodes).forEach((node) => {
// Check if the node exists and has a disconnect method
if (node && typeof node.disconnect === 'function') {
node.disconnect()
}
// Additional handling for specific types of nodes, if necessary
// For example, stopping an OscillatorNode
if (node instanceof OscillatorNode) {
node.stop()
}
})
}
}
}
}
</script>
<style scoped>
.rnboplayer {
position: fixed;
width: 220px;
max-width: 220px; /* redundant while width is fixed at 220px */
height: 100px;
display: inline-grid;
z-index: 2;
bottom: 11%;
left: 0;
right: 0;
margin-left: auto;
margin-right: auto;
}
/* The switch - the box around the slider */
.switch {
position: relative;
display: inline-block;
width: 60px;
height: 34px;
}
/* Hide default HTML checkbox */
.switch input {
opacity: 0;
width: 0;
height: 0;
}
/* The slider */
.slider {
position: absolute;
cursor: pointer;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-color: #ccc;
-webkit-transition: .4s;
transition: .4s;
}
.slider:before {
position: absolute;
content: "";
height: 26px;
width: 26px;
left: 4px;
bottom: 4px;
background-color: white;
-webkit-transition: .4s;
transition: .4s;
}
input:checked + .slider {
background-color: #2196F3;
}
input:focus + .slider {
box-shadow: 0 0 1px #2196F3;
}
input:checked + .slider:before {
-webkit-transform: translateX(26px);
-ms-transform: translateX(26px);
transform: translateX(26px);
}
</style>
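The component above seeds `musicGain` and `noiseGain` with 0.001 before calling `exponentialRampToValueAtTime`, because per the Web Audio specification an exponential ramp follows v(t) = v0 · (v1/v0)^((t−t0)/(t1−t0)) and is undefined when either endpoint is 0. A standalone sketch of that curve (plain math, no AudioContext; the function name is ours, not part of the Web Audio API):

```javascript
// Value of a Web Audio exponential ramp at time t, per the spec formula:
// v(t) = v0 * (v1 / v0) ^ ((t - t0) / (t1 - t0)), valid only for v0, v1 > 0.
function exponentialRampValueAt (v0, v1, t0, t1, t) {
  if (v0 <= 0 || v1 <= 0) {
    throw new RangeError('exponential ramps need non-zero endpoints')
  }
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0))
}

// The fade used above: 0.001 -> 0.5 over 3 seconds.
console.log(exponentialRampValueAt(0.001, 0.5, 0, 3, 0))   // start value (0.001)
console.log(exponentialRampValueAt(0.001, 0.5, 0, 3, 1.5)) // midpoint, ≈ 0.022
console.log(exponentialRampValueAt(0.001, 0.5, 0, 3, 3))   // target, ≈ 0.5
```

The midpoint sits near 0.022 rather than 0.25, which is why exponential ramps sound like a natural fade-in: perceived loudness is roughly logarithmic in amplitude.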

File diff suppressed because one or more lines are too long

View File

@ -1,67 +0,0 @@
class BandpassProcessor extends AudioWorkletProcessor {
constructor () {
super()
// Small floor added to amplitudes so log10() never receives 0 (silence stays finite in dB)
this.minAmplitude = 1e-8
}
process (inputs, outputs) {
const input = inputs[0]
const output = outputs[0]
// const frequency = parameters.frequency
// const Q = parameters.Q
for (let channel = 0; channel < input.length; channel++) {
const inputChannel = input[channel]
// const outputChannel = output[channel]
for (let i = 0; i < inputChannel.length; i++) {
// Apply bandpass filter to inputChannel[i] and store the result in outputChannel[i]
// using the provided frequency and Q parameters
}
}
// Calculate the RMS value of the output audio data
const rms = this.calculateRMS(output)
// Calculate the dB values
const dbValues = this.convertToDB(output)
// Calculate the 10th and 90th percentile values
const percentile10 = this.calculatePercentile(dbValues, 10)
const percentile90 = this.calculatePercentile(dbValues, 90)
// Send the processed data to the main thread
this.port.postMessage({ rms, dbValues, percentile10, percentile90 })
return true
}
calculateRMS (data) {
let sumOfSquares = 0
for (let channel = 0; channel < data.length; channel++) {
const channelData = data[channel]
for (let i = 0; i < channelData.length; i++) {
sumOfSquares += channelData[i] * channelData[i]
}
}
const meanSquare = sumOfSquares / (data.length * data[0].length)
const rms = Math.sqrt(meanSquare)
return rms
}
calculatePercentile (data, percentile) {
const sortedData = data.slice().sort((a, b) => a - b)
const index = Math.floor((percentile / 100) * sortedData.length)
return sortedData[index]
}
convertToDB (data) {
const dbValues = []
for (let channel = 0; channel < data.length; channel++) {
const channelData = data[channel]
for (let i = 0; i < channelData.length; i++) {
const amplitude = Math.abs(channelData[i])
const db = 20 * Math.log10(amplitude + this.minAmplitude)
dbValues.push(db)
}
}
return dbValues
}
}
// AudioWorklet scripts register processors as a side effect; no export is needed
registerProcessor('bandpass-processor', BandpassProcessor)
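The metrics in `calculateRMS`, `convertToDB` and `calculatePercentile` are plain math and can be checked outside the worklet. A minimal standalone sketch (the `MIN_AMPLITUDE` floor of 1e-8 is an assumption standing in for `this.minAmplitude`, and the function names are ours):

```javascript
const MIN_AMPLITUDE = 1e-8 // assumed floor; keeps log10 finite for silent samples

// Root-mean-square over an array of channel arrays, mirroring calculateRMS()
function rms (channels) {
  let sum = 0
  let count = 0
  for (const ch of channels) {
    for (const s of ch) { sum += s * s; count++ }
  }
  return Math.sqrt(sum / count)
}

// Amplitude -> dB value, mirroring the per-sample step in convertToDB()
function toDB (amplitude) {
  return 20 * Math.log10(Math.abs(amplitude) + MIN_AMPLITUDE)
}

// Nearest-rank percentile, mirroring calculatePercentile()
function percentile (data, p) {
  const sorted = data.slice().sort((a, b) => a - b)
  return sorted[Math.floor((p / 100) * sorted.length)]
}

console.log(rms([[1, 1], [1, 1]]))           // 1
console.log(percentile([5, 1, 4, 2, 3], 90)) // 5
```

Note that the worklet computes these metrics on `output`, which stays silent while the filter body is unimplemented; once the bandpass is filled in, the same math applies unchanged.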

View File

@ -1,394 +0,0 @@
@import 'bootstrap/dist/css/bootstrap.min.css';
@import "font-awesome/css/font-awesome.min.css";
@font-face {
font-family: 'Montserrat';
font-style: normal;
font-weight: 700;
src: local('Montserrat Bold'), local('Montserrat-Bold'),
url('/public/fonts/Montserrat_700_normal.woff') format('woff');
}
.bg-img{
background-position: center;
background-size: cover;
background-repeat: no-repeat;
/*background-image: url("../image/loginimg.png");*/
}
body{
overflow-x: hidden;
}
.form-control{
border-radius: 5px;
padding: 10px 10px;
}
.form-control:focus{
outline: none;
border-color: rgb(233, 192, 70) !important;
box-shadow: 0 0 0 .25rem rgba(233,192,70,.25)!important;
}
.forgot-link{
/*text-decoration: none;*/
color: grey;
text-decoration: underline;
}
.login-btn{
background-color: #e9c046;
color: white;
font-size: 18px;
font-weight: bolder;
padding: 0.25em 0.75em;
border-radius: 10px;
border: 2px solid #e9c046;
transition: .25s ease-in-out;
}
.login-ins-btn{
background-color: white;
color: black;
font-size: 24px;
font-weight: bolder;
padding: 10px 10px;
border-radius: 10px;
border: 1px solid black;
transition: .25s ease-in-out;
}
button[type="submit"]:focus-visible {
outline: none;
}
.login-btn:hover, .login-btn:focus-visible{
background-color: #fff;
border: 2px solid #e9c046;
color: #e9c046;
}
.signup-link{
color: #000;
text-decoration: underline;
transition: .2s ease-in-out;
font-weight: 600;
}
.signup-link:hover, .signup-link:focus-visible, .forgot-link:hover, .forgot-link:focus-visible{
color: #e9c046;
outline: none;
}
.forgot-link {
transition: .2s ease-in-out;
}
.accordion-button-homebar:not(.collapsed)::after {
background-position-y:1px ;
}
.accordion-button-homebar:not(.collapsed) {
color: black;
background-color: white;
}
.accordion-button-homebar:focus {
z-index: 3;
border-color: white;
outline: 0;
box-shadow: none;
}
.dropdown-menu {
min-width: 250px !important;
border: none;
box-shadow: 1px 0px 4px 0px rgba(0,0,0,0.10);
}
@media only screen and (max-width: 575px) {
p,h1,h2,h3,h4,h5,h6,span{
font-size: 15px;
}
.dropdown-menu {
min-width: 205px !important;
}
}
.doted-nav{
position: fixed;
bottom: 0;
}
.dropdown-item:focus, .dropdown-item:hover {
color: white !important;
background-color: #e9c046;
border-radius: 10px;
/*margin-top: 2px;*/
}
.dropdown-item.router-link-exact-active{
color: white !important;
background-color: #e9c046;
border-radius: 10px;
/*margin-top: 2px;*/
}
.nav-link{
color: black;
border-radius: 10px;
transition: 250ms ease-in-out;
}
.nav-link:hover{
background-color: #e9c046;
color: white !important;
border-radius: 10px;
}
.nav-link:hover svg path {
fill: white;
}
.nav-link:hover svg rect {
fill: white;
}
.nav-icons.router-link-exact-active{
fill: white;
background-color: #e9c046;
}
.nav-icons.router-link-active{
fill: white;
background-color: #e9c046;
}
.nav-icons.router-link-exact-active svg path{
fill: white;
}
.nav-icons.router-link-active svg path{
fill: white;
}
.checklabel123{
padding-top: 25px !important;
}
@media only screen and (max-width: 575px){
.checklabel{
height: 123px;
width: 134px !important;
padding-top: 14px !important;
}
.checklabel123{
padding-top: 16px !important;
}
}
.form-switch .form-check-input {
padding: 10px 16px;
background-color: white;
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%23fff'/%3e%3c/svg%3e");
}
.form-switch .form-check-input:focus-visible {
outline: #e9c046 auto 1px;
}
.form-switch__label {
max-width: 80% !important;
}
.accordion-button:not(.collapsed)::after {
background-image: var(--bs-accordion-btn-icon);
fill: white;
}
.bar{
background-color: #e9c046 !important;
}
.progress-bar {
background-color: #e9c046 !important;
}
.checklabel{
background: white !important;
}
.checklabel:hover{
background: #e9c046 !important;
}
.btn {
min-width: 150px;
transition: 250ms ease-in-out;
}
.btn-primary-custom, .btn-primary {
background-color: #e9c046;
border-color: #e9c046;
color: white;
font-weight: 600 !important;
}
.btn-primary-custom:hover, .btn-primary-custom:focus, .btn-primary:hover, .btn-primary:focus{
background-color: transparent;
border-color: #e9c046;
color: #e9c046;
}
.btn-dark-custom {
background-color: #585C5E !important;
border-color: #585C5E !important;
color: white !important;
font-weight: 600 !important;
}
.btn-dark-custom:hover, .btn-dark-custom:focus {
background-color: transparent !important;
border-color: #585C5E !important;
color: #585C5E !important;
}
.btn-bullet {
min-width: unset;
width: 12px;
height: 12px;
border-radius: 50%;
background-color: #e9c046;
padding: 0;
border: none !important;
}
.btn-bullet:hover, .btn-bullet:focus {
background-color: rgba(0, 0, 0, 0.2) !important;
}
.btn-bullet.is-active {
border-radius: 10px;
width: 48px;
}
.btn-bullet.is-active:hover, .btn-bullet.is-active:focus {
background-color: #e9c046 !important;
}
.btn--icon {
min-width: unset;
}
.btn--light {
background-color: white;
color: #585C5E;
}
.btn--light:hover {
background-color: #e9c046;
color: white;
}
.btn--light:disabled {
background-color: white;
color: #585C5E;
opacity: 0.4;
border: none;
}
.btn--small {
min-width: 100px;
}
.btn--light[variant="outlined"] {
background-color: transparent;
border-color: #585C5E;
}
.btn--light[variant="outlined"]:hover {
background-color: white;
border-color: white;
color: #585C5E;
}
p{
max-width: 750px;
margin: 0;
}
.text-muted-dark {
color:#585C5E;
}
.onboarding-popover{
background-color: #f0f0f0 !important; /* light gray */
color: #585C5E !important; /* dark slate gray */
border-radius: 12px !important;
box-shadow: 0 10px 20px rgba(0,0,0,0.2) !important;
max-width: 320px !important;
}
.driver-popover.onboarding-popover--last {
top: 50% !important;
left: 50% !important;
transform: translate(-50%, -50%);
height: auto;
min-width: 600px;
max-height: unset;
max-width: unset;
text-align: center;
padding: 1.5em;
}
.onboarding-popover--last img {
height: 240px;
}
.onboarding-popover--last .driver-popover-navigation-btns {
justify-content: center;
}
.driver-popover-footer button {
background-color: #e9c046 !important;
border-color: #e9c046 !important;
color: white !important;
font-weight: 700 !important;
font-size: 16px !important;
padding: 0.5rem 0.375rem !important;
text-shadow: none !important;
line-height: 16px !important;
border-radius: 6px !important;
transition: 250ms ease-in-out !important;
}
.driver-popover-footer button:hover {
background-color: transparent !important;
border-color: #e9c046 !important;
color: #e9c046 !important;
}
.onboarding-popover .driver-popover-title {
font-size: 1.125rem;
font-weight: 700;
margin-bottom: 0.75rem;
}
.onboarding-popover .driver-popover-description {
font-size: 1rem;
}
.onboarding-popover .driver-popover-close-btn {
color: #585C5E;
top: 15px;
right: 15px;
width: auto;
height: auto;
font-size: 22px;
line-height: 18px;
}
.onboarding-popover .driver-popover-footer button svg{
pointer-events: none;
}
.onboarding-popover .driver-btn {
color: white !important;
border-radius: 8px !important;
padding: 0.4rem 1rem;
border: none !important;
}
.onboarding-popover .driver-btn:hover {
background-color: #2563eb !important;
}
@media screen and (max-width: 768px) {
.driver-popover-arrow {
display: none;
}
}

View File

@ -1,7 +0,0 @@
@tailwind base;
@tailwind components;
@tailwind utilities;
body{
@apply bg-gray-50;
}

Binary file not shown.

Before

Width:  |  Height:  |  Size: 562 B


View File

@ -1 +0,0 @@
<svg width="24" height="24" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M11.999 5.707a2.7 2.7 0 0 0-4.67-1.853 2.7 2.7 0 0 0-.726 1.966 3.6 3.6 0 0 0-2.273 5.192 3.6 3.6 0 0 0 .5 5.927 3.599 3.599 0 1 0 7.169.466V5.707Z" stroke="#585C5E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/><path d="M11.999 5.707a2.699 2.699 0 1 1 5.396.113 3.599 3.599 0 0 1 2.273 5.192 3.6 3.6 0 0 1-.5 5.927 3.598 3.598 0 1 1-7.17.466V5.707Z" stroke="#585C5E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/><path d="M14.698 12.906a4.049 4.049 0 0 1-2.7-3.6 4.05 4.05 0 0 1-2.699 3.6m7.738-5.849c.217-.377.34-.802.359-1.237m-10.793 0c.017.435.14.86.358 1.237M4.33 11.012c.164-.133.34-.253.526-.356m14.285 0c.186.103.362.223.527.356M6.6 17.405a3.6 3.6 0 0 1-1.77-.465m14.337 0a3.6 3.6 0 0 1-1.77.465" stroke="#585C5E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/></svg>


View File

@ -1 +0,0 @@
<svg width="24" height="24" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M11.999 5.707a2.7 2.7 0 0 0-4.67-1.853 2.7 2.7 0 0 0-.726 1.966 3.6 3.6 0 0 0-2.273 5.192 3.6 3.6 0 0 0 .5 5.927 3.599 3.599 0 1 0 7.169.466V5.707Z" stroke="#585C5E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/><path d="M11.999 5.707a2.699 2.699 0 1 1 5.396.113 3.599 3.599 0 0 1 2.273 5.192 3.6 3.6 0 0 1-.5 5.927 3.598 3.598 0 1 1-7.17.466V5.707Z" stroke="#585C5E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/><path d="M14.698 12.906a4.048 4.048 0 0 1-2.7-3.6 4.05 4.05 0 0 1-2.699 3.6m7.738-5.849c.217-.377.34-.802.359-1.237m-10.793 0c.017.435.14.86.358 1.237M4.33 11.012c.164-.133.34-.253.526-.356m14.285 0c.186.103.362.223.527.356M6.6 17.405a3.6 3.6 0 0 1-1.77-.465m14.337 0a3.6 3.6 0 0 1-1.77.465" stroke="#585C5E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/><path d="M20.293 3 3 22.707" stroke="#585C5E" stroke-width="2" stroke-linecap="round"/></svg>



View File

@ -1 +0,0 @@
<svg width="24" height="24" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M8.286 18.875c0 .564-.226 1.104-.628 1.503a2.152 2.152 0 0 1-3.03 0 2.116 2.116 0 0 1 0-3.006 2.152 2.152 0 0 1 3.03 0c.402.399.628.94.628 1.503Zm0 0V6.479L19 4v12.396M8.286 10.73 19 8.25m0 8.5c0 .564-.226 1.104-.628 1.503a2.152 2.152 0 0 1-3.03 0 2.116 2.116 0 0 1 0-3.006 2.152 2.152 0 0 1 3.03 0c.402.399.628.94.628 1.503Z" stroke="#585C5E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" /></svg>


View File

@ -1 +0,0 @@
<svg width="24" height="24" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M20.293 2 3 21.707" stroke="#585C5E" stroke-width="2" stroke-linecap="round"/><path d="M8.286 18.875c0 .564-.226 1.104-.628 1.503a2.152 2.152 0 0 1-3.03 0 2.116 2.116 0 0 1 0-3.006 2.152 2.152 0 0 1 3.03 0c.402.399.628.94.628 1.503Zm0 0V6.479L19 4v12.396M8.286 10.73 19 8.25m0 8.5c0 .564-.226 1.104-.628 1.503a2.152 2.152 0 0 1-3.03 0 2.116 2.116 0 0 1 0-3.006 2.152 2.152 0 0 1 3.03 0c.402.399.628.94.628 1.503Z" stroke="#585C5E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/></svg>


View File

@ -1 +0,0 @@
<svg viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg"><g id="SVGRepo_bgCarrier" stroke-width="0"></g><g id="SVGRepo_tracerCarrier" stroke-linecap="round" stroke-linejoin="round"></g><g id="SVGRepo_iconCarrier"> <path d="M9 19C9 20.1046 7.65685 21 6 21C4.34315 21 3 20.1046 3 19C3 17.8954 4.34315 17 6 17C7.65685 17 9 17.8954 9 19ZM9 19V5L21 3V17M21 17C21 18.1046 19.6569 19 18 19C16.3431 19 15 18.1046 15 17C15 15.8954 16.3431 15 18 15C19.6569 15 21 15.8954 21 17ZM9 9L21 7" stroke="#000000" stroke-width="0.744" stroke-linecap="round" stroke-linejoin="round"></path> </g></svg>


View File

@ -1,60 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
viewBox="0 0 24 24"
fill="none"
version="1.1"
id="svg1"
sodipodi:docname="musicicon.svg"
inkscape:version="1.3.2 (091e20e, 2023-11-25)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<defs
id="defs1" />
<sodipodi:namedview
id="namedview1"
pagecolor="#ffffff"
bordercolor="#111111"
borderopacity="1"
inkscape:showpageshadow="0"
inkscape:pageopacity="0"
inkscape:pagecheckerboard="1"
inkscape:deskcolor="#d1d1d1"
inkscape:zoom="9.8333333"
inkscape:cx="11.949153"
inkscape:cy="12"
inkscape:window-width="1200"
inkscape:window-height="853"
inkscape:window-x="3447"
inkscape:window-y="152"
inkscape:window-maximized="0"
inkscape:current-layer="SVGRepo_iconCarrier" />
<g
id="SVGRepo_bgCarrier"
stroke-width="0" />
<g
id="SVGRepo_tracerCarrier"
stroke-linecap="round"
stroke-linejoin="round" />
<g
id="SVGRepo_iconCarrier">
<path
d="m 9,19 c 0,1.1046 -1.34315,2 -3,2 -1.65685,0 -3,-0.8954 -3,-2 0,-1.1046 1.34315,-2 3,-2 1.65685,0 3,0.8954 3,2 z m 0,0 V 5 L 21,3 v 14 m 0,0 c 0,1.1046 -1.3431,2 -3,2 -1.6569,0 -3,-0.8954 -3,-2 0,-1.1046 1.3431,-2 3,-2 1.6569,0 3,0.8954 3,2 z M 9,9 21,7"
stroke="#000000"
stroke-width="0.744"
stroke-linecap="round"
stroke-linejoin="round"
id="path1"
sodipodi:nodetypes="sssssccccssssscc" />
<path
d="m 9,19 c 0,1.1046 -1.34315,2 -3,2 -1.65685,0 -3,-0.8954 -3,-2 0,-1.1046 1.34315,-2 3,-2 1.65685,0 3,0.8954 3,2 z m 0,0 V 5 L 21,3 v 14 m 0,0 c 0,1.1046 -1.3431,2 -3,2 -1.6569,0 -3,-0.8954 -3,-2 0,-1.1046 1.3431,-2 3,-2 1.6569,0 3,0.8954 3,2 z M 1.5762712,21.610169 C 8.752223,15.835661 7.66429,15.289776 23.440678,2.6271186"
stroke="#000000"
stroke-width="0.744"
stroke-linecap="round"
stroke-linejoin="round"
id="path2"
sodipodi:nodetypes="sssssccccssssscc"
style="stroke-width:1.044;stroke-dasharray:none" />
</g>
</svg>


View File

@ -1 +0,0 @@
<svg width="85px" height="85px" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg" stroke="#000000"><g id="SVGRepo_bgCarrier" stroke-width="0"></g><g id="SVGRepo_tracerCarrier" stroke-linecap="round" stroke-linejoin="round"></g><g id="SVGRepo_iconCarrier"> <path d="M3 11V13M6 10V14M9 11V13M12 9V15M15 6V18M18 10V14M21 11V13" stroke="#000000" stroke-width="0.9120000000000001" stroke-linecap="round" stroke-linejoin="round"></path> </g></svg>


View File

@ -1,62 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
width="85px"
height="85px"
viewBox="0 0 24 24"
fill="none"
stroke="#000000"
version="1.1"
id="svg1"
sodipodi:docname="noiseicon_muted.svg"
inkscape:version="1.3.2 (091e20e, 2023-11-25)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<defs
id="defs1" />
<sodipodi:namedview
id="namedview1"
pagecolor="#ffffff"
bordercolor="#111111"
borderopacity="1"
inkscape:showpageshadow="0"
inkscape:pageopacity="0"
inkscape:pagecheckerboard="1"
inkscape:deskcolor="#d1d1d1"
inkscape:zoom="2.7764706"
inkscape:cx="42.319915"
inkscape:cy="42.5"
inkscape:window-width="1200"
inkscape:window-height="449"
inkscape:window-x="3438"
inkscape:window-y="188"
inkscape:window-maximized="0"
inkscape:current-layer="SVGRepo_iconCarrier" />
<g
id="SVGRepo_bgCarrier"
stroke-width="0" />
<g
id="SVGRepo_tracerCarrier"
stroke-linecap="round"
stroke-linejoin="round" />
<g
id="SVGRepo_iconCarrier">
<path
d="m 3,11 v 2 m 3,-3 v 4 m 3,-3 v 2 m 3,-4 v 6 m 3,3 V 6 m 3,4 v 4 m 3,-3 v 2"
stroke="#000000"
stroke-width="0.912"
stroke-linecap="round"
stroke-linejoin="round"
id="path1"
sodipodi:nodetypes="cccccccccccccc" />
<path
d="m 3,11 v 2 m 3,-3 v 4 m 3,-3 v 2 m 3,-4 v 6 M 3,18.40678 22.322034,5.8983051 M 18,10 v 4 m 3,-3 v 2"
stroke="#000000"
stroke-width="0.912"
stroke-linecap="round"
stroke-linejoin="round"
id="path2"
sodipodi:nodetypes="cccccccccccccc" />
</g>
</svg>


View File

@ -1,3 +0,0 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2.75 9.93806V14.0621M6.45 6.84506V17.1551M10.15 3.23706V20.7631M13.85 7.13206V16.8681M17.55 9.56606V14.4341M21.25 10.9691V13.0311" stroke="#585C5E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>


View File

@ -1 +0,0 @@
<svg width="24" height="24" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M2.75 9.938v4.124m3.7-7.217v10.31m3.7-13.918v17.526m3.7-13.631v9.736m3.7-7.302v4.868m3.7-3.465v2.062" stroke="#585C5E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/><path d="M20.293 2 3 21.707" stroke="#585C5E" stroke-width="2" stroke-linecap="round"/></svg>


File diff suppressed because one or more lines are too long


File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -1,55 +0,0 @@
Copyright (c) 2023 Cycling '74
The code that Max generates automatically and that end users are capable of
exporting and using, and any associated documentation files (the “Software”)
is a work of authorship for which Cycling '74 is the author and owner for
copyright purposes.
This Software is dual-licensed either under the terms of the Cycling '74
License for Max-Generated Code for Export, or alternatively under the terms
of the General Public License (GPL) Version 3. You may use the Software
according to either of these licenses as it is most appropriate for your
project on a case-by-case basis (proprietary or not).
A) Cycling '74 License for Max-Generated Code for Export
A license is hereby granted, free of charge, to any person obtaining a copy
of the Software (“Licensee”) to use, copy, modify, merge, publish, and
distribute copies of the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following conditions:
The Software is licensed to Licensee for all uses that do not include the sale,
sublicensing, or commercial distribution of software that incorporates this
source code. This means that the Licensee is free to use this software for
educational, research, and prototyping purposes, to create musical or other
creative works with software that incorporates this source code, or any other
use that does not constitute selling software that makes use of this source
code. Commercial distribution also includes the packaging of free software with
other paid software, hardware, or software-provided commercial services.
For entities with UNDER $200k in annual revenue or funding, a license is hereby
granted, free of charge, for the sale, sublicensing, or commercial distribution
of software that incorporates this source code, for as long as the entity's
annual revenue remains below $200k annual revenue or funding.
For entities with OVER $200k in annual revenue or funding interested in the
sale, sublicensing, or commercial distribution of software that incorporates
this source code, please send inquiries to licensing@cycling74.com.
The above copyright notice and this license shall be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Please see
https://support.cycling74.com/hc/en-us/articles/10730637742483-RNBO-Export-Licensing-FAQ
for additional information
B) General Public License Version 3 (GPLv3)
Details of the GPLv3 license can be found at: https://www.gnu.org/licenses/gpl-3.0.html

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


View File

@ -1,38 +0,0 @@
// Mock API response for a successful login
export default {
"status": "success",
"user": {
"id": 2,
"name": null,
"first_name": "Robert",
"surname": "Rapp",
"email": "robbi@happy-rapp.de",
"email_verified_at": null,
"type": null,
"avatar": null,
"blocked_at": null,
"created_at": "2024-02-17T05:02:18.000000Z",
"updated_at": "2024-02-17T05:02:18.000000Z",
"stripe_id": null,
"pm_type": null,
"pm_last_four": null,
"trial_ends_at": null,
"language": "en",
"subscriptions": [],
"settings": {
"id": 2,
"user_id": 2,
"headphone_type": "Over-ear",
"anc_type": "No",
"plan_today": null,
"soundscape": "x",
"status": null,
"adaptive_sound_scape": "yes",
"created_at": "2024-02-17T05:02:24.000000Z",
"updated_at": "2024-03-13T11:59:30.000000Z"
}
},
"authorisation": {
"token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwOi8vYi5taW5kYm9vc3QudGVhbS9hcGkvbG9naW4iLCJpYXQiOjE3MTAzNDY2NzQsImV4cCI6MTcxMDM4MjY3NCwibmJmIjoxNzEwMzQ2Njc0LCJqdGkiOiJKUUs3OHFzTEpNQkJwWERzIiwic3ViIjoiMiIsInBydiI6IjIzYmQ1Yzg5NDlmNjAwYWRiMzllNzAxYzQwMDg3MmRiN2E1OTc2ZjcifQ.Z7siezlXekT2GaOa4UDK46-xEp411714iAB8wClxA48",
"type": "bearer"
}
}
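The `authorisation.token` in the mock above is a standard JWT: three base64url-encoded segments (header.payload.signature). The header and payload can be decoded without the signing secret; only signature verification needs it. A hedged sketch for Node (`Buffer` with the `'base64url'` encoding, available since Node 15; the helper name is ours):

```javascript
// Decode the header and payload of a JWT without verifying its signature.
function decodeJwt (token) {
  const [header, payload] = token
    .split('.', 2)
    .map(part => JSON.parse(Buffer.from(part, 'base64url').toString('utf8')))
  return { header, payload }
}

// The mock token above begins with the canonical HS256 header segment:
const { header } = decodeJwt('eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.e30.sig')
console.log(header) // { typ: 'JWT', alg: 'HS256' }
```

Decoding like this is fine for reading claims such as `exp` client-side, but authorization decisions must still rely on server-side signature verification.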

View File

@ -1,31 +0,0 @@
// Mock user object
export default {
id: 2,
name: null,
first_name: 'Robert',
surname: 'Rapp',
email: 'robbi@happy-rapp.de',
email_verified_at: null,
type: null,
avatar: null,
blocked_at: null,
created_at: '2024-02-17T05:02:18.000000Z',
updated_at: '2024-02-17T05:02:18.000000Z',
stripe_id: null,
pm_type: null,
pm_last_four: null,
trial_ends_at: null,
language: 'en',
subscriptions: [],
settings: {
id: 2,
user_id: 2,
headphone_type: 'Over-ear',
anc_type: 'No',
plan_today: null,
soundscape: 'x',
status: null,
adaptive_sound_scape: 'yes',
created_at: '2024-02-17T05:02:24.000000Z',
updated_at: '2024-03-13T11:59:30.000000Z'
}
}

Binary file not shown.

Binary file not shown.

View File

@ -1,26 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIEUDCCArigAwIBAgIRAOt7j272m9MstVCvadgqKqYwDQYJKoZIhvcNAQELBQAw
gYsxHjAcBgNVBAoTFW1rY2VydCBkZXZlbG9wbWVudCBDQTEwMC4GA1UECwwncm9i
ZXJ0cmFwcEBtaW5kYm9vc3QudGVhbSAoUm9iZXJ0IFJhcHApMTcwNQYDVQQDDC5t
a2NlcnQgcm9iZXJ0cmFwcEBtaW5kYm9vc3QudGVhbSAoUm9iZXJ0IFJhcHApMB4X
DTI1MDUxNTE0MDAyNloXDTI3MDgxNTE0MDAyNlowWzEnMCUGA1UEChMebWtjZXJ0
IGRldmVsb3BtZW50IGNlcnRpZmljYXRlMTAwLgYDVQQLDCdyb2JlcnRyYXBwQG1p
bmRib29zdC50ZWFtIChSb2JlcnQgUmFwcCkwggEiMA0GCSqGSIb3DQEBAQUAA4IB
DwAwggEKAoIBAQDFRWTWjoLcNmSVt1Ml/Gweaqp4fnuxKr5T+12+Qju3pImmVvcx
gypxjDmn1DeX6dQhE7YTZTEsCSdOIvo56Gwu+AoaaTzmqetsqoieP8o5Ti7i4D+L
bdHYNrcMaejjyBs5B9DawDUmhKkLGIBMLeVCYsWsQ23VJgb2R1+T+VNlWps6C/ca
7RMzyhVZkBi6iyPxNn7AaPzf27xpgP6IqtPcaxEzQg0K7dIu8lVCMP204qMZt933
xqIusPIYJMylx+xuuWfaP0+ZhLOCBI72iE61pgDXGnsOsLrZ5Lq2hdInHSpkbE4y
yD9VQnBIc9+EBFV+podXeEIqigSIjOrLPM/fAgMBAAGjXjBcMA4GA1UdDwEB/wQE
AwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATAfBgNVHSMEGDAWgBQ6oRmoLK8yrb2e
+PzlNKkB/m4phjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZIhvcNAQELBQAD
ggGBAFqERegob0tmUZWgSenzlCMWO1sjNzGNBlpywU6+IFV598YIRe9NvVHhynmq
ToOW3G39ZZ2qR2rnmpwQkJtxhQHI4FX7V1qyJJKbPb2YP0LiiV6iG5SvQHqxHxNO
eoQtT7heZ32sZKJyVWANanxTPXI+tyhfs1CMGG8spC8N5IYrD00+er5TJ3rm0juv
MVXHvDi8jgNUP+LTy5R1eo90cTcs+P4SkmVRR6qN3AUH5nEINJM5U4oFjQaT2q92
u3cbuGqB+ak4S8M/XQz6wfCOZ5O6xN8vqM8bJWACA2L9lbx5iuJ8OYlgNi3bS6cI
fwKZs/Q/H67t1dNbYRbRWvvwd2pT9p2vG34+IHIyYiUrTp6MsxyW/mkqHjoKBN3E
7Sc9m6X8hT0LNViDdGF2MB/36keI7Ik6KliVjCo6cil/qbNgEM++cDeiV7z6c9Pk
/pJaJWIInQfpbdGB8odHggYSFLhs1CnSAdJbvT+uvVMzWRGmeJy0RXZoCpy5NLqJ
qJheoQ==
-----END CERTIFICATE-----

Some files were not shown because too many files have changed in this diff.