One Two Three Movie Streaming


Roshan Fried

Aug 5, 2024, 1:26:51 AM
to nerkacorpe
In other words: you do not load the entire model and then parse/display it, as with common 3D formats. Instead, you stream the data, parse individual chunks, and progressively increase the resolution of the model up to the highest level.
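To make the idea concrete, here is a minimal TypeScript sketch of that streaming loop, assuming a hypothetical chunked mesh format; `parseRefinement` and `applyRefinement` are placeholders for format- and renderer-specific logic, not an existing API.

```typescript
// Sketch only: stream a hypothetical chunked mesh format and apply each
// complete refinement record as it arrives, instead of waiting for the
// full download.

interface Refinement {
  byteLength: number;       // bytes this record consumed from the stream
  level: number;            // resolution level the mesh is raised to
  vertexData: Float32Array; // new/updated vertex attributes
}

async function streamMesh(
  url: string,
  // Returns null when the buffer does not yet hold a complete record.
  parseRefinement: (bytes: Uint8Array) => Refinement | null,
  applyRefinement: (r: Refinement) => void,
): Promise<void> {
  const response = await fetch(url);
  if (!response.body) throw new Error("response is not streamable");
  const reader = response.body.getReader();

  let pending = new Uint8Array(0);
  for (;;) {
    const { done, value } = await reader.read();
    if (done || !value) break;

    // Append the new network chunk to the unparsed remainder.
    const merged = new Uint8Array(pending.length + value.length);
    merged.set(pending);
    merged.set(value, pending.length);
    pending = merged;

    // Apply every complete refinement the buffer now contains; the model
    // sharpens on screen while the download continues.
    let r: Refinement | null;
    while ((r = parseRefinement(pending)) !== null) {
      applyRefinement(r);
      pending = pending.subarray(r.byteLength); // drop consumed bytes
    }
  }
}
```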

Progressive Mesh Streaming seems to be a very popular topic in the scientific community, but so far I have found no references about its practicality in real products. Does anyone have experience using Progressive Mesh Streaming in the context of WebGL? Also, could Progressive Mesh Streaming potentially be interesting/useful for your projects?


Unfortunately, the thesis is in German, and I have not yet had the time to translate certain parts into English. Sorry for this! I will try to do so in the future. But since you guys (@VisCIrcle) are from Germany, it should at least be no problem for you^^.


For Tesseract I use progressive asset streaming for most content; the larger the scene is and the more its assets can be managed spatially, the more useful it becomes to dynamically load/unload them. Impostors (billboard sprites) are loaded first, along with proxy models (the lowest possible poly representation, basically LOD geometries). The system optionally uses a WebSocket connection instead of individual HTTP requests, which makes quite a big difference with a lot of files.
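As an illustration of that load order (impostor first, then proxy, then the full asset), here is a hedged TypeScript sketch; the types, distance thresholds, and `load`/`unload` callbacks are all hypothetical placeholders, not Tesseract's actual code.

```typescript
// Sketch of distance-driven asset streaming: each asset is kept at the
// detail level its camera distance calls for, and is unloaded entirely
// when out of range.

type DetailLevel = "impostor" | "proxy" | "full";

interface StreamedAsset {
  id: string;
  position: { x: number; y: number; z: number };
  loaded: DetailLevel | null; // what is currently in memory
}

function targetLevel(distance: number): DetailLevel | null {
  if (distance > 500) return null;       // out of range: keep unloaded
  if (distance > 200) return "impostor"; // billboard stand-in
  if (distance > 50) return "proxy";     // lowest-poly LOD geometry
  return "full";
}

// Called per frame (or on a timer): upgrade/downgrade each asset so that
// memory tracks what is actually near the camera.
function updateStreaming(
  assets: StreamedAsset[],
  cameraPos: { x: number; y: number; z: number },
  load: (id: string, level: DetailLevel) => void,
  unload: (id: string) => void,
): void {
  for (const a of assets) {
    const dx = a.position.x - cameraPos.x;
    const dy = a.position.y - cameraPos.y;
    const dz = a.position.z - cameraPos.z;
    const level = targetLevel(Math.sqrt(dx * dx + dy * dy + dz * dz));
    if (level === a.loaded) continue;
    if (level === null) unload(a.id);
    else load(a.id, level);
    a.loaded = level;
  }
}
```

In practice you would also add some hysteresis around the thresholds so assets don't thrash between levels when the camera hovers near a boundary.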


@Mugen87 do we have any update on this? My project needs it desperately, as the scenes I load are generally really heavy even after optimizing them as much as I can. Rendering them in chunks would solve my problem to a great extent.


Is there a method I can use to progressively load/stream/increase the level of detail (not sure of the correct terminology) as the model loads, so that my users aren't staring at a blank screen for minutes before anything appears?


I hope one day we'll have nice POP buffer (or a similar alternative) progressive loading for three.js too. It's on my/our todo/wish list as well, but many things come before it here... I certainly don't mind if someone writes it earlier :)
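For context, the core trick of POP buffers (Limper et al.) is, as I understand it, to quantize vertex positions and order triangles by the lowest precision level at which they are still non-degenerate, so that any prefix of the buffer renders as a valid coarse mesh. A small illustrative TypeScript sketch of that ordering criterion (not the full encoder):

```typescript
// Quantize a normalized coordinate in [0, 1) to `bits` bits of precision.
function quantize(x: number, bits: number): number {
  const cells = 1 << bits;
  return Math.min(Math.floor(x * cells), cells - 1);
}

// The precision level at which a triangle first becomes non-degenerate:
// the smallest bit count where its three quantized vertices land in
// three distinct grid cells.
function firstValidLevel(
  tri: Array<[number, number, number]>, // three vertices, normalized coords
  maxBits: number,
): number | null {
  for (let k = 1; k <= maxBits; k++) {
    const cells = tri.map((v) => v.map((c) => quantize(c, k)).join(","));
    if (new Set(cells).size === 3) return k;
  }
  return null; // degenerate even at full precision
}

// Sorting triangles ascending by firstValidLevel yields the progressive
// buffer: the first N_k triangles already form a valid k-bit mesh.
```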


We've done that in one project, and it works fine, as expected. The only downside I can see is that it increases the total time to get the high-resolution version. But the low-resolution version can be very small, so it's OK (in our case, low-poly and untextured with just vertex colours, while the high-resolution ones have many more polys and essentially quite big textures too).


Based on the glTF format, it transmits meshes one by one. In a certain sense this reduces the waiting time until the first render. Actually, if you rearrange the mesh data and use a streaming buffer to download it, you might achieve progressive loading/rendering.
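A hedged TypeScript sketch of that mesh-by-mesh idea: fetch the small glTF JSON first, then pull each mesh's binary data with HTTP Range requests (which require server support) and show meshes as they arrive. `byteRangeForMesh`, `buildMesh`, and `addToScene` are hypothetical placeholders; stock glTF loaders generally load the whole asset at once.

```typescript
// Sketch of incremental glTF delivery. Computing a mesh's byte range from
// its accessors/bufferViews is real format work that is elided here.

interface GltfJson {
  meshes: Array<{ name?: string }>;
  // ...accessors, bufferViews, buffers omitted for brevity
}

async function loadGltfIncrementally(
  baseUrl: string,
  byteRangeForMesh: (gltf: GltfJson, meshIndex: number) => [number, number],
  buildMesh: (gltf: GltfJson, meshIndex: number, bytes: ArrayBuffer) => unknown,
  addToScene: (mesh: unknown) => void,
): Promise<void> {
  // 1. The JSON header is small; it says where each mesh's data lives.
  const gltf: GltfJson = await (await fetch(`${baseUrl}/scene.gltf`)).json();

  // 2. Fetch each mesh's byte range separately and render it immediately,
  //    so the first meshes appear long before the last ones download.
  for (let i = 0; i < gltf.meshes.length; i++) {
    const [start, end] = byteRangeForMesh(gltf, i);
    const res = await fetch(`${baseUrl}/scene.bin`, {
      headers: { Range: `bytes=${start}-${end}` },
    });
    addToScene(buildMesh(gltf, i, await res.arrayBuffer()));
  }
}
```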


Streaming computations are by nature long-running, and their workloads can change in unpredictable ways. This in turn means that maintaining performance may require dynamically scaling allocated computational resources. Some modern large-scale stream processors allow dynamic scaling but typically leave the difficult task of deciding how much to scale to the user. The process is cumbersome, slow and often inefficient. Where automatic scaling is supported, policies rely on coarse-grained metrics like observed throughput, backpressure, and CPU utilization. As a result, they tend to show incorrect provisioning, oscillations, and long convergence times. We present DS2, an automatic scaling controller for such systems which combines a general performance model of streaming dataflows with lightweight instrumentation to estimate the true processing and output rates of individual dataflow operators. We apply DS2 on Apache Flink and Timely Dataflow and demonstrate its accuracy and fast convergence. When compared to Dhalion, the state-of-the-art technique in Heron, DS2 converges to the optimal, backpressure-free configuration in a single step instead of six.
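As I read the abstract, the core sizing rule can be sketched as follows: estimate each operator's true per-instance processing rate by dividing its observed rate by the fraction of time it spends on useful work, then give each operator just enough instances to absorb the aggregate true output rate of its upstream operators. The TypeScript below is an illustrative reconstruction of that rule, not DS2's actual interface.

```typescript
// Hedged sketch of the rate-based scaling rule behind DS2.
// All field names are illustrative assumptions.

interface OperatorStats {
  name: string;
  upstream: string[];          // upstream operator names ([] for sources)
  observedRate: number;        // records/s processed per instance, observed
  usefulTimeFraction: number;  // fraction of time doing actual processing
  trueOutputRate: number;      // records/s emitted per instance, backpressure-free
  instances: number;           // current parallelism
}

function optimalParallelism(ops: OperatorStats[]): Map<string, number> {
  const byName = new Map(ops.map((o) => [o.name, o]));
  const result = new Map<string, number>();
  for (const op of ops) {
    if (op.upstream.length === 0) {
      result.set(op.name, op.instances); // sources keep their parallelism
      continue;
    }
    // Aggregate true output rate arriving from all upstream instances.
    const incoming = op.upstream.reduce((sum, u) => {
      const up = byName.get(u)!;
      return sum + up.trueOutputRate * up.instances;
    }, 0);
    // True processing rate of one instance, with idle/backpressure
    // time factored out.
    const truePerInstance = op.observedRate / op.usefulTimeFraction;
    result.set(op.name, Math.ceil(incoming / truePerInstance));
  }
  return result;
}
```

Because the estimate uses true rates rather than throughput as currently observed under backpressure, the controller can jump to the final configuration in one step instead of iterating.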






Ultrasound-driven oscillating microbubbles are used as active actuators in microfluidic devices to perform manifold tasks such as mixing, sorting, and manipulation of microparticles. A common configuration consists of side bubbles created by trapping air pockets in blind channels perpendicular to the main channel direction. The acoustically excited bubbles in this configuration have a semicylindrical shape and generate significant streaming flow. Because of the geometry of the channels, such flows are generally considered quasi-two-dimensional. Similar assumptions are often made in many other microfluidic systems based on flat microchannels. However, in this Letter we show that microparticle trajectories actually present a much richer behavior, with particularly strong out-of-plane dynamics in regions close to the microbubble interface. Using astigmatism particle-tracking velocimetry, we reveal that the apparent planar streamlines are actually projections of a stream surface with a pseudotoroidal shape. We therefore show that acoustic streaming cannot generally be assumed to be a two-dimensional phenomenon in confined systems. The results have crucial consequences for most applications involving acoustic streaming, such as particle trapping, sorting, and mixing.


Typical experimental setup. Left: The PDMS channel and the piezoelectric actuator are bonded to a glass slide. Right: Top view of the semicylindrical bubble in the microchannel and visualization of the acoustic streaming flow through different experimental particle trajectories.




Precise and contact-free manipulation of physical and biological objects is highly desirable in a wide range of fields that include nanofabrication, micro- and nano-robotics, drug delivery, and cell and tissue engineering. To this end, acoustic tweezers serve as a fast-developing platform for precise manipulation across a broad object size range1,2. There are two primary types of acoustic tweezers under development at present: radiation force tweezers and acoustic-streaming tweezers.


Radiation force tweezers, in which the acoustic radiation force acts as the trap, can be divided into standing-wave tweezers and traveling-wave tweezers. To date, most demonstrated acoustic tweezers are standing-wave tweezers that use counter-propagating waves to create a mesh of standing-wave nodes and antinodes where the particles are trapped3,4,5,6,7,8,9,10,11,12,13,14. Such systems are particularly suitable for manipulating groups of particles, but the chessboard-like node network precludes object selectivity. In addition, standing-wave trapping typically requires multiple transducers that surround the trapping region, which adds complexity and makes it incompatible with some application scenarios, especially those that involve a fixed object inside the trapping region.


Both standing-wave tweezers and traveling-wave tweezers rely on the acoustic radiation force to directly manipulate particles, whereas acoustic-streaming tweezers take advantage of fluid flows induced by nonlinear Rayleigh streaming34, and thus handle particles indirectly by creating streaming vortices35 with oscillating bubbles36 or rigid structures37,38. These tend to be simple devices that are easy to operate, but they offer a low degree of spatial resolution, because microbubble- and microstructure-based phenomena are nonlinear and difficult to control2. Fluid manipulation has been demonstrated using controlled pumping39, but it is limited to 2D and requires sophisticated control over the source array.


Here we propose a hybrid 3D single-beam acoustic tweezer that combines the radiation force and acoustic streaming. We exploit Eckart streaming34 and demonstrate that, instead of being a nuisance, carefully designed acoustic streaming can be embedded in a focused acoustic vortex to create a fully 3D trap. As a proof of concept, we generated a focused acoustic vortex with a single piezoelectric transducer and a passive polydimethylsiloxane (PDMS) lens. The experimental levitation force provided by streaming is 3 orders of magnitude larger than previously reported33 and accommodates a wider range of particle sizes, shapes, and material properties. We demonstrate this three-dimensional acoustic tweezer first by simulation and experimental measurement of the acoustic field. The acoustic streaming flow field is then measured with particle image velocimetry (PIV). Finally, levitation, trapping, and 3D manipulation of a particle are demonstrated in a fluid environment.


a A focused acoustic vortex provides in-plane particle trapping, while the localized gradient streaming field levitates the particle, providing the trap in the third dimension. The inset shows a photo of the fabricated device. b Evolution of the intensity and phase fields across different cut planes along the z axis. The field is gradually focused as it propagates, keeping the spiral phase profile in the central region.


The acoustic streaming flow was simulated using another open-source tool, OpenFOAM41. The streaming-effect modeling was performed in three stages, as previously suggested in ref. 42: (i) simulation of the wave propagation in the time domain using the compressible-flow computational fluid dynamics (CFD) solver, (ii) time-averaging of the effective nonlinear equation term to calculate the body force driving the acoustic streaming flow, and (iii) using the incompressible steady-state CFD solver to calculate the streaming velocity field by adding the effective external force term calculated in step (ii). All the required solvers are included in the default OpenFOAM distribution, with only minor code modifications required.
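To make steps (ii) and (iii) concrete: a common formulation of the time-averaged body force is the divergence of the Reynolds stress of the first-order acoustic field (Nyborg's form). Whether ref. 42 uses exactly this form is an assumption on my part; the sketch below is the textbook version.

```latex
% Step (ii): time-averaging the nonlinear term of the first-order
% acoustic velocity field u_1 yields a steady body force; angle
% brackets denote the average over one acoustic period.
F_j = -\rho_0 \,
      \frac{\partial \langle u_{1,i}\, u_{1,j} \rangle}{\partial x_i}

% Step (iii): the steady streaming velocity \bar{u} and pressure \bar{p}
% then solve the incompressible equations driven by F:
\rho_0 \left( \bar{u} \cdot \nabla \right) \bar{u}
    = -\nabla \bar{p} + \mu \nabla^{2} \bar{u} + F,
\qquad \nabla \cdot \bar{u} = 0
```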
