<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ranjith Hegde</title>
    <description>The latest articles on Forem by Ranjith Hegde (@ranjithshegde).</description>
    <link>https://forem.com/ranjithshegde</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3688851%2Fe97cc952-5399-43a1-9677-bf0da31551bb.jpg</url>
      <title>Forem: Ranjith Hegde</title>
      <link>https://forem.com/ranjithshegde</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ranjithshegde"/>
    <language>en</language>
    <item>
      <title>The Matrix and the Mesh</title>
      <dc:creator>Ranjith Hegde</dc:creator>
      <pubDate>Sat, 28 Mar 2026 04:30:20 +0000</pubDate>
      <link>https://forem.com/ranjithshegde/the-matrix-and-the-mesh-3m5o</link>
      <guid>https://forem.com/ranjithshegde/the-matrix-and-the-mesh-3m5o</guid>
      <description>&lt;h2&gt;
  
  
  A matrix is not a camera
&lt;/h2&gt;

&lt;p&gt;Vulkan does not have a camera. It has a push constant slot. If you put a matrix there, the vertex shader can read it. If you do not, nothing breaks. The geometry renders without a transform. That is the complete contract.&lt;/p&gt;

&lt;p&gt;MayaFlux exposes this as-is. &lt;code&gt;ViewTransform&lt;/code&gt; is two &lt;code&gt;glm::mat4&lt;/code&gt; fields, 128 bytes, exactly the minimum push constant size Vulkan guarantees. &lt;code&gt;set_view_transform&lt;/code&gt; uploads it once. &lt;code&gt;set_view_transform_source&lt;/code&gt; takes a &lt;code&gt;std::function&amp;lt;ViewTransform()&amp;gt;&lt;/code&gt; and calls it every frame. There is no camera object, no scene, no actor hierarchy, no transform component.&lt;/p&gt;

&lt;p&gt;This is not a missing feature. It is a deliberate refusal to name something that does not need a name.&lt;/p&gt;

&lt;p&gt;The first example, &lt;code&gt;compose_rhythm_viewport&lt;/code&gt;, runs a four-voice drum pattern where some hits are deliberately arrhythmic. The kick does not always land on the beat. This is not a bug being tolerated. It is the same logic as the camera angles: if you have decided that standard angles are not interesting, the same reasoning applies to standard grids. Asymmetry is not something to sanitize out before the work is presentable. It is part of the material.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/ru7deXMx1pY"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Blender understood something important: once a camera has keyframes, it becomes a compositional object. You can record into it, drive it from constraints, run it on a path, oscillate it with a noise modifier. The stated intent may just be animation, but the structure that makes animation possible also makes everything else possible. A camera with keyframes is already most of the way to being a signal.&lt;/p&gt;

&lt;p&gt;MayaFlux takes the remaining step. &lt;code&gt;set_view_transform_source&lt;/code&gt; accepts any callable that returns a matrix pair. In &lt;code&gt;compose_resonant_orbit&lt;/code&gt;, five formant resonator outputs compute azimuth, elevation, radius, field of view, and roll every frame, directly from live audio node output. In &lt;code&gt;compose_rhythm_viewport&lt;/code&gt;, drum hits accumulate velocity on independent axes with independent decay rates. The viewpoint lurches, spins, zooms, and rolls because audio events fire and the math follows. There is no camera being "controlled by audio." There is a function from numbers to a matrix, called every frame, and the numbers happen to come from a synthesis network.&lt;/p&gt;


&lt;div&gt;
    &lt;iframe src="https://www.youtube.com/embed/zNQvUIvgRo8"&gt;
    &lt;/iframe&gt;
  &lt;/div&gt;


&lt;p&gt;The deeper consequence is not yet fully showcased, but is already functional: the &lt;code&gt;Tendency&amp;lt;D,R&amp;gt;&lt;/code&gt; system. A &lt;code&gt;Tendency&lt;/code&gt; is a stateless callable from domain &lt;code&gt;D&lt;/code&gt; to range &lt;code&gt;R&lt;/code&gt;. The relationship between what another system might call "world space" and "object space" is, in MayaFlux, just a &lt;code&gt;Tendency&amp;lt;glm::vec3, glm::vec3&amp;gt;&lt;/code&gt;. It can be composed, chained, scaled, and combined with other tendencies using free functions. It can be driven by audio node output captured in the lambda. It can change every frame.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// A spatial field that pulls positions toward an audio-driven attractor.&lt;/span&gt;
&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;attractor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Kinesis&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;VectorField&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;envelope&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;glm&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vec3&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;glm&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vec3&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="kt"&gt;float&lt;/span&gt; &lt;span class="n"&gt;strength&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;static_cast&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;float&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;envelope&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;get_last_output&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;4.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;glm&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;normalize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;strength&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="n"&gt;field_op&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;FieldTarget&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;POSITION&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attractor&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "object-to-world" relationship is &lt;code&gt;fn&lt;/code&gt;. It can be anything. There is no static projection matrix being dressed up as scene freedom. The function is the relationship, and the function is live.&lt;/p&gt;




&lt;h2&gt;
  
  
  Mesh is two spans
&lt;/h2&gt;

&lt;p&gt;A loaded FBX, a procedurally generated sphere, a topology rebuilt from audio analysis thresholds, and a deforming surface driven by per-mode resonator amplitude all share the same representation in MayaFlux: a &lt;code&gt;vector&amp;lt;uint8_t&amp;gt;&lt;/code&gt; of interleaved vertex bytes and a &lt;code&gt;vector&amp;lt;uint32_t&amp;gt;&lt;/code&gt; of triangle indices, described by a &lt;code&gt;VertexLayout&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;MeshAccess&lt;/code&gt; is a non-owning view over those two spans. It carries a raw pointer into the vertex bytes, a pointer into the index array, a layout descriptor, and an optional &lt;code&gt;RegionGroup&lt;/code&gt; for submesh structure. It owns nothing. It copies nothing. The accessor pattern is identical to every other NDData type in the system: &lt;code&gt;VertexAccess&lt;/code&gt;, &lt;code&gt;TextureAccess&lt;/code&gt;, &lt;code&gt;EigenAccess&lt;/code&gt; all work the same way. Mesh is not a special case with its own access conventions.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;MeshInsertion&lt;/code&gt; is the write counterpart. It holds mutable references to the two storage variants and populates them through typed spans. Submeshes are accumulated via &lt;code&gt;insert_submesh()&lt;/code&gt;, with each batch's index range recorded as a &lt;code&gt;Region&lt;/code&gt; in a &lt;code&gt;RegionGroup&lt;/code&gt; named "submeshes". The coordinate convention is the same one used for audio transient regions, video frame regions, and every other Region in the system. A submesh boundary is a region in index space, the same as a transient is a region in sample space.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="n"&gt;MeshInsertion&lt;/span&gt; &lt;span class="nf"&gt;ins&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mesh_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vertex_variant&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mesh_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;index_variant&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;auto&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;sub&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;meshes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ins&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;insert_submesh&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vertex_bytes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;indices&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                       &lt;span class="n"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;material_name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;access&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ins&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The consequence is that mesh data is immediately legible to any system that operates on NDData. Yantra's analytical operators, Kinesis transform primitives, grammar-based sequence processors, and any offline operation that accepts a &lt;code&gt;DataVariant&lt;/code&gt; can be applied to vertex bytes or index data without a conversion step, a mesh-specific API, or a special-case code path. The vertex buffer is a span of bytes with a layout descriptor. So is an audio buffer. So is a texture. The operations that work on one class of data work on all of them.&lt;/p&gt;

&lt;p&gt;The vertex and index dirty flags are independent. &lt;code&gt;MeshWriterNode&lt;/code&gt; exposes &lt;code&gt;set_mesh_vertices()&lt;/code&gt; and &lt;code&gt;set_mesh_indices()&lt;/code&gt; separately. In &lt;code&gt;compose_drone_disintegration&lt;/code&gt;, vertex positions are rewritten every frame at 60 Hz while the index buffer is only rebuilt when audio amplitude crosses a threshold. The GPU sees two independent upload paths. The surface deforms continuously and tears structurally at discrete moments driven by different aspects of the same audio signal. These are different operations on different data streams that happen to share the same rendered output.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;MeshWriterNode&lt;/code&gt; does not know whether the bytes it receives came from deforming a previous frame, from &lt;code&gt;MeshInsertion&lt;/code&gt; processing assimp submeshes, from a Yantra operator applying a &lt;code&gt;Tendency&lt;/code&gt; field, or from a generative algorithm producing topology from scratch. It receives spans, marks dirty flags, and uploads on the next cycle. The provenance is irrelevant to the upload path, which means any source that can produce the right byte layout can drive the geometry.&lt;/p&gt;

&lt;p&gt;This is what it means for mesh to not be a special case: not that it is treated carelessly, but that it participates fully in the same computational substrate as everything else. The same mathematical infrastructure that shapes audio signals shapes geometry. The operations are not "audio operations adapted for mesh" or "mesh operations adapted for audio." They are operations on numbers, applied to whichever domain they are pointed at.&lt;/p&gt;

</description>
      <category>vulkan</category>
      <category>creativecoding</category>
      <category>audioprogramming</category>
      <category>cpp</category>
    </item>
    <item>
      <title>MayaFlux 0.1.0: A Digital-Native Substrate for Multimedia Computation</title>
      <dc:creator>Ranjith Hegde</dc:creator>
      <pubDate>Thu, 01 Jan 2026 18:33:11 +0000</pubDate>
      <link>https://forem.com/ranjithshegde/mayaflux-010-infrastructure-for-digital-creativity-13e5</link>
      <guid>https://forem.com/ranjithshegde/mayaflux-010-infrastructure-for-digital-creativity-13e5</guid>
      <description>&lt;p&gt;MayaFlux 0.1.0 is now available. This is not another creative coding framework. It is computational infrastructure built from 15 years of interdisciplinary performance practice, pedagogy, research, and production DSP engineering.&lt;br&gt;
It treats audio, visual, and control data as unified numerical streams processed through lock-free computation graphs, C++20 coroutines for temporal coordination, and complete LLVM JIT compilation for live coding.&lt;/p&gt;
&lt;h2&gt;
  
  
  What MayaFlux Is
&lt;/h2&gt;

&lt;p&gt;MayaFlux is a C++20/23 multimedia computation framework that rejects some fundamental assumptions of existing tools.&lt;br&gt;
Three design principles set it apart:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Analog synthesis metaphors have no place in digital systems.&lt;/strong&gt; Oscillators, patch cables, and envelope generators are pedagogical crutches borrowed from hardware that never constrained digital computation. MayaFlux embraces recursion, look-ahead processing, arbitrary precision, cross-domain data sharing, and computational patterns that have no analog equivalent. Polynomials sculpt data. Logic gates make creative decisions. Coroutines coordinate time itself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Audio, visual, and control processing are artificially separated.&lt;/strong&gt; These boundaries exist for commercial tool design, not computational necessity. In MayaFlux, a single unit outputs to audio channels, triggers GPU compute shaders, and coordinates temporal events simultaneously, in the same processing callback. Data flows between domains without conversion overhead because samples, pixels, and parameters are all double-precision floating point.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tools that hide complexity also hide possibility.&lt;/strong&gt; MayaFlux provides hooks everywhere. Replace the audio callback. Intercept buffer processing. Override channel coordination. Substitute backends. Every layer is replaceable, every system is overridable. If you understand the implications, you can modify anything. If you don't, the documentation teaches you through working code examples that produce real sound within minutes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  What MayaFlux Is Not
&lt;/h2&gt;

&lt;p&gt;MayaFlux is infrastructure, not application software:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not a DAW.&lt;/strong&gt; No timeline editor, no MIDI piano roll, no plugin hosting. MayaFlux provides computational substrate. Build your own sequencing logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not a node-based UI.&lt;/strong&gt; No visual patching interface. Everything is C++ code. Text is more precise for complex logic. Your patches are version-controlled source files, not opaque binaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not consumption-oriented software.&lt;/strong&gt;&lt;br&gt;
MayaFlux assumes an active relationship with computation. It rewards curiosity, modification, and experimentation rather than menu navigation or preset selection.&lt;br&gt;
Users do not need deep systems programming knowledge, but they do need a willingness to think in terms of data, processes, and structure. The framework teaches these ideas through fluent APIs and runnable examples.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not a replacement for Max or p5.js yet.&lt;/strong&gt; Those tools excel at rapid prototyping and visual exploration. MayaFlux excels at computational depth and architectural control. Eventually, yes. Right now, different purposes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Who Should Use This
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Creative technologists hitting tool limits.&lt;/strong&gt; If you've prototyped in Processing but need real-time audio, mastered Max/MSP but want programmatic control, or built installations in openFrameworks then watched Apple deprecate OpenGL, MayaFlux is infrastructure built from frustration with those limitations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visual artists and installation makers needing computational depth.&lt;/strong&gt; If you've built generative visuals in Processing or TouchDesigner but want low-level GPU control without OpenGL's deprecated patterns, or need audio and visuals truly synchronized rather than awkwardly bridged, MayaFlux treats graphics processing with the same architectural rigor as audio DSP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Researchers needing genuine flexibility.&lt;/strong&gt; Academic audio research shouldn't require fighting commercial tools to implement novel algorithms. MayaFlux provides direct buffer access, arbitrary processing rates, cross-domain coordination. Research shouldn't require reverse-engineering closed systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Musicians and composers outgrowing presets.&lt;/strong&gt; If you've exhausted existing tools and want instruments matching your musical imagination rather than vendor roadmaps, MayaFlux treats synthesis as data transformation you control at every sample.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developers escaping framework constraints.&lt;/strong&gt; Game audio middleware, creative coding libraries, visual programming environments all impose architectural boundaries. MayaFlux removes them while maintaining performance guarantees through lock-free coordination and deterministic processing.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Technical Reality
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Lock-Free Concurrent Processing
&lt;/h3&gt;

&lt;p&gt;Every node, buffer, and network uses lock-free atomic coordination through C++20's &lt;code&gt;atomic_ref&lt;/code&gt;, compare-exchange operations, and explicit memory ordering. You add oscillators, connect filters, restructure graphs while audio plays with no glitches, no dropouts, no mutex contention. Maximum latency for any modification: one buffer cycle (typically 10-20ms).&lt;/p&gt;

&lt;p&gt;This isn't careful, sparse, or scoped locking. It's no locks in the processing path at all. Pending operations queue atomically. Channel coordination uses bitmask CAS patterns. Cross-domain transfers happen through processing handles with token validation.&lt;/p&gt;
&lt;h3&gt;
  
  
  The State Promise
&lt;/h3&gt;

&lt;p&gt;Computational units process exactly once per cycle regardless of how many consumers they have. A spectral transform feeding both granular synthesis and texture generation processes once; both domains receive synchronized output. Atomic state flags prevent reprocessing. Reference counting coordinates reset. Channel bitmasks handle multi-output scenarios.&lt;/p&gt;

&lt;p&gt;This eliminates the traditional separation between audio rate and control rate. Rate is just a processing token (&lt;code&gt;AUDIO_RATE&lt;/code&gt;, &lt;code&gt;VISUAL_RATE&lt;/code&gt;, &lt;code&gt;CUSTOM_RATE&lt;/code&gt;) that tells the engine the calling frequency.&lt;/p&gt;
&lt;h3&gt;
  
  
  Unified Data Primitives
&lt;/h3&gt;

&lt;p&gt;Audio samples are numbers. Pixel values are numbers. Control parameters are numbers. Time is numbers. No conversion overhead. No semantic boundaries. A visual analysis routine directly modulates synthesis parameters. A recursive audio filter drives texture coordinates. The same &lt;code&gt;RootBuffer&lt;/code&gt; pattern works for &lt;code&gt;RootAudioBuffer&lt;/code&gt; and &lt;code&gt;RootGraphicsBuffer&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Graphics as First-Class Computation
&lt;/h3&gt;

&lt;p&gt;Vulkan integration isn't an afterthought or "audio visualization". The graphics pipeline runs on identical infrastructure: lock-free buffer coordination, token-based domain composition, unified data flow. Particle systems, geometry generation, shader bindings, all use the same Node/Buffer/Processor architecture as audio. A polynomial node can drive vertex displacement as naturally as it drives waveshaping. This is a computational substrate, not an audio library with graphics bolted on.&lt;/p&gt;
&lt;h3&gt;
  
  
  C++20 Coroutines as Temporal Material
&lt;/h3&gt;

&lt;p&gt;Time becomes compositional material through first-class coroutine support:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;sync_routine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[](&lt;/span&gt;&lt;span class="n"&gt;Vruta&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TaskScheduler&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;scheduler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Vruta&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;SoundRoutine&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;co_await&lt;/span&gt; &lt;span class="n"&gt;Kriya&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;SampleDelay&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;scheduler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;seconds_to_samples&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.02&lt;/span&gt;&lt;span class="p"&gt;)};&lt;/span&gt;
        &lt;span class="n"&gt;process_audio_frame&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

        &lt;span class="k"&gt;co_await&lt;/span&gt; &lt;span class="n"&gt;Kriya&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;MultiRateDelay&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;samples_to_wait&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scheduler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;seconds_to_samples&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;frames_to_wait&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
        &lt;span class="n"&gt;bind_push_constants&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;some_audio_matrix&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Coroutines suspend on sample counts, buffer boundaries, or arbitrary predicates. Temporal logic reads like the musical idea. No callback hell. No message-passing complexity.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Lila JIT System
&lt;/h3&gt;

&lt;p&gt;MayaFlux includes complete LLVM-based JIT compilation. Write C++ code, hit evaluate, hear/see results within one buffer cycle. No compilation step. No application restart. No workflow interruption. Full C++20 syntax, template metaprogramming, compile-time evaluation. Live coding shouldn't mean switching to a simpler language.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;Processing happens through &lt;strong&gt;Domains&lt;/strong&gt; combining Node tokens (rate), Buffer tokens (backend), and Task tokens (coordination):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="n"&gt;Domain&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;AUDIO&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Nodes&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ProcessingToken&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;AUDIO_RATE&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                &lt;span class="n"&gt;Buffers&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ProcessingToken&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;AUDIO_BACKEND&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                &lt;span class="n"&gt;Vruta&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ProcessingToken&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;SAMPLE_ACCURATE&lt;/span&gt;

&lt;span class="n"&gt;Domain&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;GRAPHICS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Nodes&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ProcessingToken&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;VISUAL_RATE&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                   &lt;span class="n"&gt;Buffers&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ProcessingToken&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;GRAPHICS_BACKEND&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                   &lt;span class="n"&gt;Vruta&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ProcessingToken&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;FRAME_ACCURATE&lt;/span&gt;

&lt;span class="c1"&gt;// Custom user example&lt;/span&gt;
&lt;span class="n"&gt;Domain&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;PARALLEL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Nodes&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ProcessingToken&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;AUDIO_RATE&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                   &lt;span class="n"&gt;Buffers&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ProcessingToken&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;AUDIO_PARALLEL&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="c1"&gt;// Executes on the GPU&lt;/span&gt;
                   &lt;span class="n"&gt;Vruta&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ProcessingToken&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;SAMPLE_ACCURATE&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Custom domains compose individual tokens for specialized requirements. Cross-modal coordination happens naturally through token compatibility rules enforced at registration, not during hot path execution.&lt;/p&gt;

&lt;p&gt;Buffers own processing chains. Chains execute processors sequentially. Processors transform data through mathematical expressions, logic operations, or custom functions. Everything composes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;compose&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;sine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;440.0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;node_buffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NodeBuffer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sine&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Audio&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;distortion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Polynomial&lt;/span&gt;&lt;span class="p"&gt;([](&lt;/span&gt;&lt;span class="kt"&gt;double&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;tanh&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="n"&gt;MayaFlux&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;create_processor&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;PolynomialProcessor&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node_buffer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;distortion&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
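&lt;p&gt;To make the per-sample behaviour concrete, here is the same idea in plain standard C++, outside MayaFlux: a &lt;code&gt;waveshape&lt;/code&gt; helper (our name, not a MayaFlux API) applies a transfer function such as &lt;code&gt;tanh&lt;/code&gt; to each sample, which is conceptually what attaching the &lt;code&gt;PolynomialProcessor&lt;/code&gt; does to the buffer:&lt;/p&gt;

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Sample-wise waveshaping: apply a transfer function to every sample of a
// block. This is an illustrative sketch of what a polynomial distortion
// processor does conceptually, not MayaFlux's actual implementation.
std::vector<double> waveshape(const std::vector<double>& in,
                              const std::function<double(double)>& shape) {
    std::vector<double> out;
    out.reserve(in.size());
    for (double x : in)
        out.push_back(shape(x));  // apply the transfer function per sample
    return out;
}
```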



&lt;p&gt;The substrate doesn't change. Your access to it deepens.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Available Now
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Platform support:&lt;/strong&gt; Windows (MSVC/Clang), macOS (Clang), Linux (GCC/Clang). Distributed through GitHub releases, Launchpad PPA (Ubuntu/Debian), COPR (Fedora/RHEL), AUR (Arch).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project management:&lt;/strong&gt; The Weave command-line tool handles dependency management, MayaFlux version acquisition and installation, and C++ project generation, with an autogenerated CMake configuration that loads the MayaFlux library and all necessary includes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audio backend:&lt;/strong&gt; RtAudio with WASAPI (Windows), CoreAudio (macOS), ALSA/PulseAudio/JACK (Linux).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Graphics backend:&lt;/strong&gt; Vulkan 1.3 with a complete pipeline: initialization, dynamic rendering, command buffer management, and swapchain presentation. Currently supports 2D particle systems, geometry networks, and shader bindings that feed node data via push constants and descriptors. A foundation for procedural animation, generative visuals, and computational geometry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live coding:&lt;/strong&gt; Lila JIT system built on LLVM 21+, supporting full C++ syntax including templates, constexpr evaluation, and incremental compilation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Temporal coordination:&lt;/strong&gt; Complete coroutine infrastructure with sample-accurate scheduling, frame-accurate synchronization, multi-rate adaptation, and event-driven execution.&lt;/p&gt;
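&lt;p&gt;The core of sample-accurate scheduling is simple to state: every event time reduces to an integer sample position before the audio callback runs, so scheduling compares integers rather than floating-point times. A one-function plain-C++ sketch of that reduction (illustrative only, not MayaFlux's scheduler API):&lt;/p&gt;

```cpp
#include <cmath>
#include <cstdint>

// An event at t seconds under sample rate sr lands on the nearest sample
// index, round(t * sr). Once event times are integer sample positions,
// "sample-accurate" just means comparing integers inside the audio callback.
// (A sketch of the idea, not MayaFlux's scheduler.)
std::int64_t event_sample(double seconds, double sample_rate) {
    return static_cast<std::int64_t>(std::llround(seconds * sample_rate));
}
```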

&lt;p&gt;&lt;strong&gt;Documentation:&lt;/strong&gt; Comprehensive tutorials starting from "load a file" and building to complete pipeline architectures. A technical blog series covering lock-free architecture and state coordination patterns. Persona-based onboarding (musician, visual artist, and so on) addressing mental-model transitions from Pure Data, Max/MSP, SuperCollider, p5.js, openFrameworks, and Processing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing:&lt;/strong&gt; Over 700 component tests validating lock-free patterns, buffer processing, node coordination, and graphics pipeline integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working Examples
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Load and process audio:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;compose&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;sound&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"path/to/file.wav"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Audio&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;buffers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MayaFlux&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;get_last_created_container_buffers&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;poly&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Polynomial&lt;/span&gt;&lt;span class="p"&gt;([](&lt;/span&gt;&lt;span class="kt"&gt;double&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="n"&gt;MayaFlux&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;create_processor&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;PolynomialProcessor&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buffers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;poly&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Build processing chains:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;compose&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;sound&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"drums.wav"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Audio&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;buffers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MayaFlux&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;get_last_created_container_buffers&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;bitcrush&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Logic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LogicOperator&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;THRESHOLD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;crush_proc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MayaFlux&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;create_processor&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;LogicProcessor&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buffers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;bitcrush&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;crush_proc&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;set_modulation_type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LogicProcessor&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ModulationType&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;REPLACE&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;clock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;4.0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;freeze_logic&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Logic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LogicOperator&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;THRESHOLD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;freeze_logic&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;set_input_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;clock&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;freeze_proc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MayaFlux&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;create_processor&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;LogicProcessor&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buffers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;freeze_logic&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;freeze_proc&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;set_modulation_type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LogicProcessor&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ModulationType&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;HOLD_ON_FALSE&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
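&lt;p&gt;The &lt;code&gt;HOLD_ON_FALSE&lt;/code&gt; chain above amounts to a clock-gated sample-and-hold: while the clock signal is above the threshold the input passes through, and when it falls below, the last passed sample repeats. A plain-C++ sketch of that behaviour (our reading of the semantics, not the &lt;code&gt;LogicProcessor&lt;/code&gt; internals):&lt;/p&gt;

```cpp
#include <cstddef>
#include <vector>

// Clock-gated freeze: while the control signal exceeds the threshold the
// input tracks through; when it drops below, the last passed sample is held.
// (Illustrative sketch of the HOLD_ON_FALSE idea in plain standard C++.)
std::vector<double> hold_on_false(const std::vector<double>& input,
                                  const std::vector<double>& clock,
                                  double threshold) {
    std::vector<double> out(input.size());
    double held = 0.0;
    for (std::size_t i = 0; i < input.size(); ++i) {
        if (clock[i] > threshold)
            held = input[i];  // condition true: track the input
        out[i] = held;        // condition false: repeat the held sample
    }
    return out;
}
```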



&lt;p&gt;&lt;strong&gt;Recursive filters:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Polynomial&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;[](&lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;deque&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;double&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;0.996&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;PolynomialMode&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;RECURSIVE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="mi"&gt;100&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;set_initial_conditions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;double&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Random&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Audio-visual coordination:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;control&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Phasor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.15&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Audio&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;control&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;enable_mock_process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;particles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vega&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ParticleNetwork&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;glm&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vec3&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;glm&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vec3&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;ParticleNetwork&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;InitializationMode&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;GRID&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;Graphics&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;particles&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;map_parameter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"turbulence"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;control&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;NodeNetwork&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;MappingMode&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;BROADCAST&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
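&lt;p&gt;&lt;code&gt;BROADCAST&lt;/code&gt; mapping means one control value fans out to the same named parameter on every particle in the network. A minimal plain-C++ sketch of that fan-out (the &lt;code&gt;Particle&lt;/code&gt; struct and &lt;code&gt;broadcast&lt;/code&gt; function here are hypothetical stand-ins, not MayaFlux types):&lt;/p&gt;

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical particle: just a bag of named parameters for illustration.
struct Particle {
    std::unordered_map<std::string, double> params;
};

// BROADCAST-style mapping: one control value is written to the same named
// parameter on every particle. (Sketch of the mapping semantics only.)
void broadcast(std::vector<Particle>& particles,
               const std::string& name, double control_value) {
    for (auto& p : particles)
        p.params[name] = control_value;  // same value for every particle
}
```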



&lt;h2&gt;
  
  
  What Comes Next
&lt;/h2&gt;

&lt;p&gt;This release establishes the foundation. Future development includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expanded graphics capabilities moving toward 3D rendering; input handling systems; networking components for distributed processing; hardware acceleration through CUDA and FPGA implementations; and WebAssembly builds enabling interactive web demos running actual MayaFlux C++ code in browsers.&lt;/li&gt;
&lt;li&gt;Additional backends: JACK audio, multiple Vulkan extensions, custom backend interfaces for embedded systems or specialized hardware.&lt;/li&gt;
&lt;li&gt;Educational content: video walkthroughs, interactive examples, pattern libraries demonstrating specific creative techniques.&lt;/li&gt;
&lt;li&gt;Institutional partnerships exploring funding for full-time development, hardware integration research, academic collaboration on novel algorithms.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;p&gt;Visit &lt;a href="https://mayaflux.org" rel="noopener noreferrer"&gt;https://mayaflux.org&lt;/a&gt; for documentation, tutorials, and downloads.&lt;/p&gt;

&lt;p&gt;Installation takes minutes. First working audio comes in under five minutes following the Sculpting Data tutorial.&lt;/p&gt;

&lt;p&gt;The framework provides default automation for common workflows while enabling complete override for specialized requirements. Start with simple patterns. Progress to architectural customization when needed. The documentation meets you where you are.&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://github.com/mayaflux/MayaFlux" rel="noopener noreferrer"&gt;https://github.com/mayaflux/MayaFlux&lt;/a&gt;&lt;br&gt;
License: GPL-3.0&lt;br&gt;
Contact: &lt;a href="mailto:mayafluxcollective@proton.me"&gt;mayafluxcollective@proton.me&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MayaFlux exists because computational substrate evolved while creative tools maintained backward compatibility with 1980s architectures. New paradigms become possible when you're not constrained by analog metaphors, disciplinary separation, or protective abstraction layers.&lt;/p&gt;

&lt;p&gt;The substrate is ready. Build what you imagine.&lt;/p&gt;

</description>
      <category>cpp</category>
      <category>multiplatform</category>
      <category>graphic</category>
      <category>music</category>
    </item>
  </channel>
</rss>
