The established rule would be to complete the tanda with two other songs by the same orchestra, with the same singer, recorded in the same narrow timeframe. This is easy to do with a few big names with an expansive recording history. Other times, not so much. Or the available recordings do not mesh well.
Valses that dancers enjoy the most have a clear and regular beat at a normal or swift walking tempo. I want to present three of those where the tempo does not change much, or if it does, it evolves from slower to faster within a rather narrow range.
Unlike tango, where it's common and often enjoyable to break the forward walk and pause for a moment, the vals should flow in a predictable manner. I avoid valses that break the flow at any point except their very beginning and end.
The "kick" would be a clear accent on the first beat in a bar, which guides the dancer and confirms that the step was on time. This strengtens the regularity of the movements the vals should provoke in dancers.
When the "kick" is not as explicit, it better still be there. "Noches de invierno" by Sexteto Cristal has it, even though it's a lyrical, softer vals. You are still compelled to keep moving on the dance floor in a regular and circular fashion.
There's the technical quality of the recording. If I have two transfers from TangoTunes, I need a third one from the same source, or at least one that sounds the same. When you close your eyes while listening, you should have the feeling that you are sitting in one room, with the same acoustic characteristics, for all three songs.
There's the number of musicians performing. There are tango duos (guitar and singer), trios (bandoneon, piano, and violin), quartets, quintets, sextets, and larger bands. A trio cannot create the same sound space as a larger orchestra. Therefore, I want all three songs to match in the number of musicians involved.
Then there are technical aspects that I use in all of my mixing, namely tempo and key. I have already stated that I prefer a narrow variance in tempo in my vals tandas. For harmonic key mixing, I will only say that I try to avoid placing two songs recorded in the same key next to each other, and will detail the rest in a later post.
What constitutes a common element is open to interpretation. It could be the use of an uncommon musical instrument, such as the harp. It could be the way the two songs start, for instance with a heart-wrenching violin solo. In essence, I want to find some arbitrary connection.
Most importantly, I want to induce the feeling that the following song starts a new theme while somehow following up on what has just happened. Looking at my sets, I use two approaches most frequently.
I would say that all three are predominantly lyrical and point in the same direction. "Violetas" comes last, as it's slightly faster and even speeds up towards the end, strongly suggesting the end of a tanda.
The first two go in the lyrical direction, then the last one reverses the trend and lightens up the mood. The reverse is also common in my sets: the first song is light-hearted, then the two following it bring in the lyrical guns.
There are additional ways, obviously: one could start off with an upbeat song, follow with a lyrical one, and finish with a smile again. I've just listed the two most common patterns I've observed myself using.
The recording dates range from 1943 to 1947. The orchestras appear to have been of a similar size. The recordings I have are of average quality, without added reverb and with average noise levels.
The tempo progresses from 62 to 68 BPM. The keys are: C minor alternating with E flat major, C minor, E minor alternating with G major. Here, you can see I have broken my rule against placing two songs in the same key next to one another.
Rules are useful, especially when one is starting out. They become guidelines as you progress, and so much of the craft is contextual that I have a hard time finding any unbreakable principles by which I would abide.
On the patch:
- remove the :static metadata, that's not used anymore
- needs docstrings, which should be written in the style of other Clojure docstrings. map is probably a good place to draw from.
- rather than declare into, defer the definitions of these until whatever they need has been defined. There is no reason to add more declares for this.
Also should consider (a sketch of the construction options follows this list):
- whether to build a k/v vector and convert to a map, or build a map directly (the former may be faster, not sure)
- if building the map, how to construct the map entries (vector vs creating a mapentry object directly)
- in map-keys, is there any open question when the mapping generates new, overlapping keys?
- are there places in existing core code where map-keys/map-vals could be used (I am pretty certain there are)
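To make the construction options concrete, here is a minimal, hypothetical sketch of the two strategies; the names map-vals-kv and map-vals-direct are mine, not from the patch:

```clojure
;; Strategy 1: build a sequence of k/v pairs, then pour it into a map.
;; Each pair can be a two-element vector, or a map entry created directly
;; via (clojure.lang.MapEntry. k (f v)).
(defn map-vals-kv [f m]
  (into {} (map (fn [[k v]] [k (f v)])) m))

;; Strategy 2: build the map directly while walking the input.
(defn map-vals-direct [f m]
  (reduce-kv (fn [acc k v] (assoc acc k (f v))) {} m))
```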
About the considerations:
> whether to build a k/v vector and convert to a map, or build a map directly (the former may be faster, not sure)
> are there places in existing core code where map-keys/map-vals could be used (I am pretty certain there are)
> if building the map, how to construct the map entries (vector vs creating a mapentry object directly)
I'll check them as soon as possible. I haven't done it yet.
A few comments:
- Implementations that build and tear apart MapEntry objects perform much worse.
- Transients should be used for large maps but not for small ones.
- This benchmark shows that the property of maintaining the type of the map in the output can be achieved without sacrificing performance (the test cases using Specter or "empty" have this property).
Implementations that call reduce-kv are not lazy, so the documentation should be clarified in the proposed patch (map-mapper-v3.patch). Also, it's probably better to say "map" (the noun) rather than specify a particular concrete type, "hash map".
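Putting those observations together, a hedged sketch of an implementation that keeps the type of the input map and reaches for a transient only when the map is large enough to amortize it; the name map-vals* and the threshold of 8 are illustrative guesses, not the patch code or a measured crossover point:

```clojure
(defn map-vals*
  "Returns a map of the same type as m with f applied to each value.
  Not lazy: realizes the whole result eagerly via reduce-kv."
  [f m]
  (if (and (> (count m) 8)
           (instance? clojure.lang.IEditableCollection m))
    ;; large, editable map: build through a transient, freeze at the end
    (persistent!
      (reduce-kv (fn [acc k v] (assoc! acc k (f v)))
                 (transient (empty m)) m))
    ;; small map, or a non-editable type such as a sorted map
    (reduce-kv (fn [acc k v] (assoc acc k (f v)))
               (empty m) m)))
```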
-1 to this. Clojure aims to be a small core, pushing additional functionality into libraries. The problem space of compound transformations, of which this functionality is a small piece, is already thoroughly solved by Specter. Specter's MAP-VALS and MAP-KEYS navigators additionally support removal of key/value pairs during transformation by transforming to the special NONE value. This expands the utility greatly.
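(For reference, the removal behavior described above looks roughly like this in Specter, assuming the com.rpl.specter library is on the classpath:)

```clojure
(require '[com.rpl.specter :refer [transform setval MAP-VALS NONE]])

;; Navigate to every value and transform it.
(transform MAP-VALS inc {:a 1 :b 2})
;; => {:a 2, :b 3}

;; Setting a value to NONE removes the whole key/value pair.
(setval [MAP-VALS nil?] NONE {:a 1 :b nil})
;; => {:a 1}
```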
Nathan, you're making a strawman re: compound transformations. This isn't a request for a function with filtering knobs or conditional behavior (which Clojure has historically opposed). There are other valid approaches than Specter's.
Re the fast implementation: not every function has to strive for the most performant implementation, especially at the cost of branching and general complexity. A cost model for performance has to take complexity into account.
Performance is incredibly important for general data structure manipulation functions like this. Manipulating data structures is one of the most basic things you do in a language, and it's done all the time in performance-sensitive code. Having to abandon the "nice" functions for ugly specialized code in hot paths is a failure of the language.
map-vals/map-keys are part of a rich problem space that I and the users of Specter have learned a lot about over the past few years. Clojure barely touches this problem space, especially when it comes to nested or recursive data structures. Adding functions to Clojure that are significantly inferior in both semantics and performance to what already exists in an external library seems pretty silly to me. Just my two cents.
I agree with Nathan Marz that performance is very important, but I still have a strong opinion that this function should find its way into the core of the language.
People use this transformation pretty often, and if there is a fast implementation in core, it will be a great benefit to all of us.
Lazy values can decrease performance when they are used for fields that are frequently accessed.
On each access of the field, the runtime has to check whether the right-hand side has already been evaluated. So if your calculation is not too expensive, the performance cost of the lazy handling can be greater than the savings from not evaluating some values.
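As a rough illustration, lazy val x = compute() in Scala 2 is encoded approximately like this hand-written sketch (the real scheme uses a volatile bitmap field and double-checked locking, and details vary across compiler versions):

```scala
class Example {
  @volatile private var initialized = false
  private var cached: Long = _

  private def computeAndCache(): Long = this.synchronized {
    if (!initialized) {
      cached = compute() // run the right-hand side exactly once
      initialized = true
    }
    cached
  }

  // every access pays for the flag check; the first access also pays for the lock
  def x: Long = if (initialized) cached else computeAndCache()

  private def compute(): Long = 42L // stand-in for an expensive computation
}
```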
Based on earlier warnings in this thread, I converted many of my lazy vals to defs, but the overall speed of my program decreased. Deciding when to use a lazy val, a regular val, or a def is not a simple matter! I guess it would help if I could get a profiler to provide me with a count of each usage of a val. Is there a reasonably simple way to get that?
A disadvantage of pure parameterless defs and lazy vals compared to simple vals is also that defs and lazy vals can close over a big amount of data and in effect cause memory exhaustion. Consider the following code:
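The snippet did not survive here; a hedged reconstruction of the kind of example meant: a constructor parameter referenced from a def or lazy val body has to be kept as a field so the body can run later, whereas a plain val lets the parameter be collected right after construction.

```scala
class Eager(samples: Vector[Double]) {
  // computed during construction; samples need not be retained afterwards
  val mean: Double = samples.sum / samples.size
}

class Deferred(samples: Vector[Double]) {
  // the body may run later, so samples is promoted to a field and
  // retained for this object's whole lifetime
  lazy val mean: Double = samples.sum / samples.size
}
```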
This seems like an opportunity for someone to develop an application that keeps track of how many times each val or method is used and how long it takes to compute. Or is something like that available already in existing profilers?
```scala
case class A(lazy val x: Long, lazy val y: Long)
             ^
error: lazy modifier not allowed here. Use call-by-name parameters instead
                               ^
error: lazy modifier not allowed here. Use call-by-name parameters instead
```
For example, adding a case class as an element to a default immutable Set, or as a key to a default immutable Map, with more than four elements will call hashCode and access every primary component of the case class, because those collections are backed by HashSet and HashMap, respectively.
My rule of thumb would be to only use lazy val if there is something special about your design that tells you that this particular value has a high chance of not being needed. If you need a profiler to know, you probably just want a val.
val is fastest to access, but you always have to create it. def is fastest to compute (not usually noticeably faster than val but it saves memory), but you have to compute it whenever you want it. lazy val has a sizable additive penalty to computation time on first use, and a smaller additive penalty on every access, but you only have to pay those if you need it, and you only have to pay the big one once.
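A compact way to see the three choices side by side (a toy sketch, not a benchmark):

```scala
class Tradeoffs {
  val a: Double = math.sqrt(2.0)      // computed once, eagerly; plain field read on access
  def b: Double = math.sqrt(2.0)      // recomputed on every call; nothing is stored
  lazy val c: Double = math.sqrt(2.0) // computed on first access; guarded check on every access
}
```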