
I read Andy Weir's "The Martian" and saw the movie, which is supposed to be realistic science fiction. And I immediately saw a couple of huge flaws, one of which I'd like to tackle with this question:

In one scene a storm occurs on the surface of Mars. The winds are strong enough to cause humans to struggle to stand and to tear a satellite dish from its anchor. However, the atmosphere of Mars is much less dense than Earth's and I don't think even the winds of the strongest of Martian storms could have such an effect.

The winds on Mars are much faster than is typical on Earth; however, the atmosphere is much thinner. This can cause some unusual effects, somewhat analogous to high speed/low torque vs. low speed/high torque.

Let's assume the satellite dish had a surface area of $1 m^2$, a reasonable assumption. Also, let's use the math from this question, and the listed Martian wind speed of 175 kph. That's 48.61 m/s, which is a high wind speed. Taking $\rho \approx 0.020\ \mathrm{kg/m^3}$ for the Martian surface atmosphere, that creates a dynamic pressure of $P = \frac{1}{2}\rho v^2 \approx \frac{1}{2}(0.020\ \mathrm{kg/m^3})(48.61\ \mathrm{m/s})^2 \approx 23.6\ \mathrm{Pa}$.

For our $1 m^2$ antenna, that exerts a force of 23.6 N, or about the force of gravity on a 2.4 kg weight. That would probably be enough to bend the dish, if built on Earth. However, there are other things to keep in mind.
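As a sanity check, the dynamic-pressure arithmetic can be reproduced in a few lines of Python (the 0.020 kg/m³ surface density is an assumed round value for Mars):

```python
# Dynamic pressure of a Martian wind on a flat plate (drag coefficient ignored).
rho_mars = 0.020      # kg/m^3, approximate density at the Martian surface
v = 175 / 3.6         # 175 kph converted to m/s (~48.61 m/s)
area = 1.0            # m^2, assumed dish area

pressure = 0.5 * rho_mars * v**2   # dynamic pressure in Pa
force = pressure * area            # resulting force in N

print(round(pressure, 1), round(force, 1))  # ~23.6 Pa, ~23.6 N
```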

Taking all of the above into account, it could be possible to have at least a 4 times higher effective pressure, which makes the story a bit more believable. Still, I think this is one of the less scientifically accurate parts of the book, but it's not completely impossible.

The biggest inaccuracy in the movie is straight from the book, so it's also a big inaccuracy in the book. It's right at the beginning, the sandstorm that strands him there. (So this is not a spoiler; everyone knows he gets stranded there due to a sandstorm.)

I had an alternate beginning in mind where they're doing an engine test on their ascent vehicle, and there's an explosion and that causes all the problems. But it just wasn't as interesting and it wasn't as cool. And it's a man-versus-nature story. I wanted nature to get the first punch.

So I went ahead and made that deliberate concession to reality, figuring, "Ah, not that many people will know it." And then now that the movie's come out, all the experts are saying, "Hey, everyone should be aware that this sandstorm thing doesn't really work and Mars isn't like that."

The Martian winds are not as strong as portrayed in The Martian. The air pressure on Mars is about 1% of Earth's at sea level, and 200 kph, while enough to notice, would not be enough to blow you over. Calculating the Earth-equivalent speed using the formulas in the related question posted by @1337joe, 200 kph on Mars is equivalent to about 25 kph on Earth, or 15 mph. That's a breeze, not a gale strong enough to make you stumble even in lower gravity, and I can't imagine it ripping a satellite dish off its mountings. Even if it did come off, the dish would be going only 25 kph; that's more of a tumbleweed than a projectile.
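The Earth-equivalent figure comes from matching dynamic pressure, $\frac{1}{2}\rho v^2$, across the two atmospheres; a quick sketch, with textbook density values assumed:

```python
import math

# Earth-equivalent wind speed: the speed that produces the same dynamic
# pressure (1/2 * rho * v^2) in Earth's much denser sea-level air.
rho_mars = 0.020    # kg/m^3, assumed Martian surface density
rho_earth = 1.225   # kg/m^3, Earth at sea level
v_mars = 200.0      # kph, the storm speed under discussion

v_earth = v_mars * math.sqrt(rho_mars / rho_earth)
print(round(v_earth, 1))           # roughly 25 kph, matching the estimate above
print(round(v_earth / 1.609, 1))   # roughly 15-16 mph
```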

From a bogon/martian listing perspective, I can get this automatically updated from the Team Cymru guys, who maintain a dynamic listing. I have their BGP peering details, but they have also stated BGP communities for this dynamic update.

set policy-options policy-statement test term TITAN-BOGES from community MASK
set policy-options policy-statement test term TITAN-BOGES then accept
set policy-options policy-statement test term last then reject

This will ensure that those martian routes are not advertised beyond your AS. If those routes get advertised to other ASes, you become susceptible to receiving traffic in your AS for those martian routes, which would be undesirable.

Well, it will work, but not in the way the OP desires. Your solution above will ensure that bogon routes received from Team Cymru will not get advertised upstream (or somewhere else/anywhere, depending on policy). Whereas the OP needed the Team Cymru BGP feed in order to block the incoming bogon routes from his own upstreams.

You are right, it will not help block prefixes received from the upstream. And the policy details shared above do not list the action part, but looking at the scenario, I assume this is RTBH and the action will be next-hop discard. So it won't help block those prefixes from being received from the upstream, but it will drop any traffic received for those prefixes.
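If it is indeed RTBH, the missing action might look something like the following sketch; the 192.0.2.1 discard next-hop is a placeholder of my own, not from the original thread, and the real setup would use whatever discard address your network reserves:

```
set routing-options static route 192.0.2.1/32 discard
set policy-options policy-statement test term TITAN-BOGES then next-hop 192.0.2.1
```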

Before there were neural networks and machine learning to do all our computer talking, speech was generally synthesized by modeling the vocal cords and resonant cavities of the mouth. There's decades of research into this, and probably the most well-known classic example is SAM, the Software Automatic Mouth, from the early '80s.

Even though I love the way SAM sounds, and just look at him, it wasn't quite what I had in mind for the martians in this game. Something less familiar would be nice for a start. Also, speech synths in this form are finely honed masterpieces of software design and who's got time for that.

Maybe the problem is that we're working with raw samples in the time domain. Let's switch to the frequency domain and try interpolating there. We want this to work on portable hardware so first break the audio clips down to a limited set of important frequencies.

I put together some numpy/scipy python code to do this part. The result is a set of frequencies and amplitudes that can be used to reconstruct the sound with sine waves. Baby's first MP3. Maybe there's something interesting we can do once everything is driven by sine waves instead of individual samples.
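A minimal sketch of that analysis/resynthesis step, assuming numpy; function names and the 8-peak budget are my own choices, not the original code:

```python
import numpy as np

# Keep only the strongest spectral peaks of a clip, then rebuild it from
# sine waves at those frequencies -- "baby's first MP3".
def analyze(clip, sr, n_peaks=8):
    spectrum = np.fft.rfft(clip * np.hanning(len(clip)))
    freqs = np.fft.rfftfreq(len(clip), 1.0 / sr)
    idx = np.argsort(np.abs(spectrum))[-n_peaks:]   # strongest bins
    amps = 2.0 * np.abs(spectrum[idx]) / len(clip)  # rough per-bin amplitude
    return freqs[idx], amps

def resynthesize(freqs, amps, sr, n_samples):
    t = np.arange(n_samples) / sr
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))

# A 440 Hz test tone survives the round trip:
sr = 11025
t = np.arange(2048) / sr
clip = np.sin(2 * np.pi * 440 * t)
freqs, amps = analyze(clip, sr)
out = resynthesize(freqs, amps, sr, len(clip))
```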

Sounds basically right, if a little muffled. I can hear the martians already. Now try blending between the reconstructed clips by interpolating the sine frequencies. This should hopefully be an improvement on crossfading the samples. Waveform (top) & spectrogram (bottom):
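The interpolation idea can be sketched like this (function name and data are illustrative, not the original code): each sine sweeps linearly from its frequency in clip A to its frequency in clip B, with phase accumulated by integration so the sweep stays click-free.

```python
import numpy as np

# Blend two sine-resynthesized clips by interpolating per-sine frequency
# and amplitude over the duration of the output.
def blend(freqs_a, freqs_b, amps_a, amps_b, sr, n_samples):
    t = np.arange(n_samples) / sr
    alpha = t / t[-1]                         # ramps 0 -> 1 over the clip
    out = np.zeros(n_samples)
    for fa, fb, aa, ab in zip(freqs_a, freqs_b, amps_a, amps_b):
        f = fa + (fb - fa) * alpha            # per-sample frequency sweep
        a = aa + (ab - aa) * alpha
        phase = 2 * np.pi * np.cumsum(f) / sr # integrate frequency for phase
        out += a * np.sin(phase)
    return out

# One sine sweeping 300 Hz -> 2300 Hz: a siren, not a vowel.
two = blend(np.array([300.0]), np.array([2300.0]),
            np.array([1.0]), np.array([1.0]), 11025, 4096)
```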

Nope! That makes a good siren but it doesn't sound like a voice. The peak frequencies can be pretty far apart for each vowel and there's a "whipping" effect when interpolating between them. I suspect the spectrum is just too sparse and it would need a lot more sine waves to avoid sounding artificial. This rabbit hole sucks. With basically no DSP or speech theory experience this was a lost cause and after a fair bit of flailing I put the whole thing down and moved on to other non-speech stuff.

Almost a year later, I came back to the speech synthesis task. Having forgotten most of the first try my plan this time was to go even simpler. Why synthesize anything at all? Just string together a bunch of audio clips and call it a day.

Well, maybe. I didn't give it much chance. This technique just wasn't grabbing me. One for being too simple and two for having so few limitations. There are infinite ways to prepare and process audio clips for sequential playback. I need constraints, the more the better.

If, like me, you've ever crossed paths with The Talking Moose on a classic Mac then you know all three requirements are handily met by a traditional speech synthesizer. The Moose is based on MacinTalk which as far as I can tell is implemented similarly to SAM.

I trashed all my previous code and researched the details of how these synths actually work. The seminal model here is the Klatt speech synthesizer. Dennis Klatt worked out a system of cascading and parallel filters applied to the fundamental waveform + noise, and all the complicated articulations necessary to sound like human voice. Back in 1980.

Brief summary. The human vocal system has cavities that amplify & resonate the vocal cords' vibrations at certain frequency bands. These bands are called formants and they change based on the shape of the mouth, tongue, lips, palate, etc. Each vowel sound has a different set of formants.

Formants are different from the peaks I was detecting in my first try in that they're independent of the fundamental pitch of the voice, and they have a bandwidth. To get the formant frequencies and bandwidths I ditched my custom python code and switched to using Praat, which is laser focused on this exact task.

Note that these are not marking sharp spikes as much as broad hilltops in the spectrum. It's somewhat surprising (to me) that these vowel formant frequencies don't vary much from person to person. From Synthesizing static vowels and dynamic sounds:

With this formant model, I wrote C code for synthesizing vowels using a sawtooth wave passed through a series of resonating filters. I'm familiar with using these kinds of filters in hardware and DAW synths. What do they look like in actual code? Basically just a weighted sum of the current sample and previous outputs. The weights are calculated from your desired filter frequency, resonance, and bandwidth. MusicDSP.org was a great resource when working on this.
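A Python sketch of one such resonator, following the classic Klatt-style two-pole formula; names and the formant values are my own illustrative choices:

```python
import math

# One formant resonator: each output sample is a weighted sum of the
# current input sample and the two previous outputs. The weights follow
# from the formant's center frequency and bandwidth.
def resonator_coeffs(freq, bw, sr):
    c = -math.exp(-2 * math.pi * bw / sr)
    b = 2 * math.exp(-math.pi * bw / sr) * math.cos(2 * math.pi * freq / sr)
    a = 1 - b - c          # normalizes the filter to unity gain at DC
    return a, b, c

def resonate(samples, freq, bw, sr):
    a, b, c = resonator_coeffs(freq, bw, sr)
    y1 = y2 = 0.0
    out = []
    for x in samples:
        y = a * x + b * y1 + c * y2
        out.append(y)
        y1, y2 = y, y1
    return out

# Feed a 110 Hz sawtooth through an approximate "ah" first formant:
sr = 11025
saw = [((i * 110 / sr) % 1.0) * 2 - 1 for i in range(2048)]
voiced = resonate(saw, freq=700, bw=130, sr=sr)
```

In a full synth these resonators are chained in series, one per formant, over the same source waveform.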

Pretty much exactly like SAM, and glad for it. Nothing like a desperate third attempt to slide the goalposts right up. The AH-EE-AH blend that didn't work before sounds fine now when interpolating filter frequencies:

The next step was to integrate a noise source to synthesize the 't', 'ch', 's', 'k', and other non-voiced sounds. Unlike with the vowels, these use a separate parallel filter bank. I made some good progress here before realizing that (A) this part is much trickier since it requires careful modulation to sound right and (B) the end result would be better sounding human speech, which I wasn't really after.

The final vowel synth has parameters for overall speed, input waveform, fundamental pitch (including sub-oscillator, vibrato, and randomized LFO), and formant frequencies. A set of these parameters defines the basic sound of a voice.

One trick I found to get slightly less muffled output was to fix the two highest-frequency formants at 4 kHz and 6 kHz. This is essentially an HF boost and adds a noticeable crispness when running at the synth's relatively low 11 kHz sample rate.
