Jarvis Features


Carmine Osterland

Aug 5, 2024, 12:45:06 AM
to sibudreber
Just A Rather Very Intelligent System (J.A.R.V.I.S.) was originally Tony Stark's natural-language user interface computer system, named after Edwin Jarvis, the butler who worked for Howard Stark. Over time, he was upgraded into an artificially intelligent system, tasked with running business operations for Stark Industries as well as security for Tony Stark's Mansion and Stark Tower. After creating the Mark II armor, Stark uploaded J.A.R.V.I.S. into all of the Iron Man Armors, and also allowed him to interact with the other Avengers, giving them valuable information during combat. During the Ultron Offensive, J.A.R.V.I.S. was destroyed by Ultron, although his remaining code unknowingly continued to thwart Ultron's plans of gaining access to nuclear missiles. His remains were found by Stark, who uploaded them into a synthetic body made of vibranium; in conjunction with Ultron's personality and an Infinity Stone, an entirely new being was made: Vision. J.A.R.V.I.S.' duties were then taken over by F.R.I.D.A.Y.

In Iron Man 2, J.A.R.V.I.S. has access to security-related devices, such as a passcode system to prevent intruders from entering the basement, shown when James Rhodes attempts to access the Hall of Armors and don the Mark II to stop Tony Stark. In Iron Man 3, he destroys all of the armors on Tony's order under the "Clean Slate Protocol", resulting in a fireworks display as a sign of Tony's devotion to Pepper Potts.


In Marvel's The Avengers: Iron Man - Mark VII, J.A.R.V.I.S. appears as a character in the digital comic application that the user can interact with. He helps Tony Stark build his first few suits, and his dialogue can either be played in narration mode or activated by simply tapping on the desired word in his dialogue that the user wants to listen to.


J.A.R.V.I.S. appears as an assistant to the user in the application for Apple mobile devices. The application has many features that enable the user to interact with J.A.R.V.I.S., who helps the user in many of the same ways he helps Tony in the Iron Man films.


One option, as you suggested, could be to follow links when searching for related notes. In fact, this might also be helpful when updating the related notes panel. This way, any links that appear in the chat note will be taken into account in the search for related notes (excluding the links that Jarvis generates to cite notes).
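To make the idea concrete, here is a rough sketch (not the plugin's actual code) of how internal Joplin links could be pulled out of a chat note so that the linked notes join the related-notes search. The function name and its `exclude` parameter are illustrative; the exclusion list would hold the note IDs of citation links that Jarvis itself generates:

```python
import re

# Joplin internal links look like [title](:/noteid), where noteid is a
# 32-character hex ID. This pattern captures the ID from each link.
LINK_RE = re.compile(r"\[[^\]]*\]\(:/([0-9a-f]{32})\)")

def linked_note_ids(note_body, exclude=()):
    """Return the IDs of notes linked from note_body, skipping any
    IDs in `exclude` (e.g. Jarvis-generated citation links)."""
    excluded = set(exclude)
    return [i for i in LINK_RE.findall(note_body) if i not in excluded]
```

The returned IDs could then simply be appended to the candidate list before the related-notes similarity search runs.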


Another option could be to add Joplin search to your prompt. For example, a line in the prompt such as: "search: DeLorean notebook:Automobiles". Jarvis will use the top hits from Joplin's search results and take only the most relevant excerpts from these notes (there's still a need to select excerpts because the notes may be too long to fit the maximal allowed length of the prompt). The search prompt itself will not be sent to GPT.
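Roughly, the excerpt selection could work along the lines of this sketch (illustrative only, not the actual implementation): paragraphs from the top search hits are scored against the query and packed into the prompt-length budget:

```python
def select_excerpts(notes, query, max_chars=2000):
    """Score paragraphs from the given note bodies by word overlap
    with the query, then pack the best ones into a character budget."""
    query_words = set(query.lower().split())
    scored = []
    for note in notes:
        for para in note.split("\n\n"):
            score = len(set(para.lower().split()) & query_words)
            if score:
                scored.append((score, para))
    # Highest-scoring paragraphs first; stable sort keeps note order on ties.
    scored.sort(key=lambda s: s[0], reverse=True)

    excerpts, used = [], 0
    for score, para in scored:
        if used + len(para) > max_chars:
            continue  # skip paragraphs that would overflow the budget
        excerpts.append(para)
        used += len(para)
    return excerpts
```

In practice the real selection is embedding-based rather than word-overlap-based, but the budget-packing idea is the same.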


Shikuz, I tried this plugin and experimented a bit with the "chat with notes" feature. I am writing a fiction book and I was intrigued by the idea of asking questions about my creation. This produced some very impressive and funny results. Thank you for this great plugin.


I did have some challenges though. For example, for some chapters of my book, it didn't find any related notes, so I couldn't chat with the opened chapter. I tried updating the Jarvis database, but it didn't change anything. This also made me concerned whether Jarvis was taking into account the whole context of my book when it responded to other notes that had found "related notes".


I mentioned that this was a funny experience. This was because I told Jarvis to take on the role of one of my book characters so that I could talk to it. This "chat with notes" feature helped me to see what impression my characters made.


Perhaps try to set a lower threshold for "Minimal note similarity" in the settings. v0.4.4 (coming out very soon) will include additional tools to help in this case, such as letting the user specify the notes they wish to chat with.
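For context, here is a minimal sketch of what the "Minimal note similarity" setting does conceptually (the function names are made up; the real implementation differs): candidate notes are ranked by the cosine similarity of their embeddings to the current note, and anything below the threshold is dropped, so lowering the threshold surfaces more, looser matches:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def related_notes(current_emb, candidates, min_similarity=0.5):
    """candidates: list of (note_id, embedding) pairs.
    Returns note IDs above the threshold, most similar first."""
    hits = [(cosine(current_emb, emb), note_id) for note_id, emb in candidates]
    return [nid for sim, nid in sorted(hits, reverse=True) if sim >= min_similarity]
```

With a threshold of 0.5, fairly loose matches survive; raising it toward 0.9 keeps only near-duplicates, which is why chapters with unusual vocabulary may come back with no related notes at all.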


You could define the role that you wish Jarvis to play by describing it in the chat. I haven't thought of a smooth way of integrating roles into chat without making the user interface too clunky. But if you think you'll find it useful, I believe it will be possible to add note-based context to "Ask Jarvis".


This release adds a number of experimental features that help you shape the context that Jarvis gets from your notes. Most of these are disabled by default until we gather more feedback, so look them up in the settings.


I'm a recent Joplin user (just so I could try it with Jarvis!). Have you considered an "always on" mode in which Jarvis auto-tags notes (maybe for a given notebook) and possibly periodically updates some form of internal knowledge base with summaries, etc.? The thought is that with the combination of Joplin as storage, tags as metadata, and so on, you could craft something more "live" and organic, reducing the cataloging efforts of the user. Essentially I'm thinking of something akin to what each agent has in the Generative Agents: Interactive Simulacra of Human Behavior paper: a memory with derived aggregate impressions of sorts.


I've been thinking about auto-tags and auto-titles (or 1-line summaries) for some time, yet it's still in the backlog (originally planned for v0.3.0, but it got pushed back as other features demanded all of my attention). One challenge that I had in mind was to try to mimic the style / system of the user, such as the vocabulary of tags that they prefer to use, and the way they use these tags. I'd still like to try to implement it at some point, but there's a chance that this is best done by training a model on the user's notes.


An "organic internal knowledge base" is also something that the recent note DB / embeddings / related notes tries to achieve, albeit through a different approach. Finally, although it's not an automatic background process, you can already start experimenting with quickly-generated auto-summaries and auto-tags using prompt templates (see the comment here on developing your own workflow). I'd be happy to hear about such experiments if you do.
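As a starting point for such experiments, a tagging prompt template could look something like this sketch (the template text and tag format here are my assumptions, not a built-in Jarvis template):

```python
# A hypothetical auto-tagging template: it asks the model to reuse the
# user's existing tag vocabulary where possible, which is a crude way
# of mimicking the user's tagging style.
TAG_TEMPLATE = (
    "Suggest up to {n} short tags for the following note, "
    "as a comma-separated list, reusing this existing vocabulary "
    "where it fits: {vocabulary}\n\nNote:\n{note}"
)

def build_tag_prompt(note_body, vocabulary, n=5):
    return TAG_TEMPLATE.format(n=n, vocabulary=", ".join(vocabulary), note=note_body)

def parse_tags(model_reply):
    """Turn the model's comma-separated reply into normalized tags."""
    return [t.strip().lower() for t in model_reply.split(",") if t.strip()]
```

A background job could run this over a notebook and apply the parsed tags, which is roughly the "always on" mode described above.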


Shikuz, I tried out the new features. I was sometimes able to get Jarvis to impersonate my fictional characters even better by using "chat with notes" with a prompt instructing Jarvis to impersonate my character, along with an embedded search for relevant keywords, but it only seemed to work in some cases. (Probably depending on the quality of the prompt and search keywords.) In other cases it looked like I was getting a random answer that was not very relevant to my question.


I wondered if I could get a more consistent conversation with the impersonated character. I thought that if I wrote some fictional conversations with the character, I could use those notes to feed Jarvis's personality... My idea was to write something like this:


I would then use a new note to instruct Jarvis to pretend to be ELIS, ask ELIS a question, and also include "Search: ELIS". That way he would only find messages from ELIS and include them if they were relevant to my question. This doesn't seem to work, probably because I don't have enough relevant text. I wanted to ask you if you think this could work if I had written enough fictional conversations...


Or maybe you have a better way of achieving this... Basically, my idea was to be able to adjust Jarvis' personality and "memories", simply by controlling which fake past conversations are fed to Jarvis as a basis for his answers...


@shikuz Maybe it would work if Jarvis had some sort of profile settings. Just like Bing has 3 different modes, Jarvis could have different profiles. Each profile could have different notebooks as sources and also different model temperatures, note similarity setting... I find that depending on which notebook I am browsing, I need to use a slightly different Jarvis setting to get the optimal result.


Does the embedded search actually work? In the chat preview I seem to get different "relevant notes" that I think have nothing to do with my search query (at best it is a word match). And if it does work, could you add some settings that would allow me to exclude semantically similar notes and only process the notes and notebooks to which I have narrowed the search?


@JamesWriterNarry I did the following experiment: I downloaded a couple of scripts (removed the title of the film and the credits, but left the character names untouched), each into a single note, and tried to talk to the characters. I requested that the note containing the script would be the single source for all context sent to the model (see one example below). I think that Jarvis was able to get into character to some extent, and to refer to events and interactions with other characters (not all of them, of course, but some).


User: I want you to act as the character Vincent, think like he does, feel the way he does, and talk like he does. The script below contains examples of conversations that Vincent had. What's your favorite food?

Notes: 4a935661db4e443ba2cb85d801a5eac3


Jarvis: As Vincent, my favorite food is definitely a good steak. But speaking of food, do you know what a five-dollar shake tastes like? Mia had one at Jackrabbit Slim's and I gotta say, it was pretty damn good. [Note 1]


The only problem is that it's not a very good experiment, as I could push Jarvis to guess the name of the film in all the cases I checked (even ones much less famous than Pulp Fiction). Therefore, it's possible that the character was also based on the model's training, which isn't applicable in your case (I do imagine that the model can generally detect archetypes easily and impersonate them, based on a few clues / descriptions / examples). However, it's worth noting that without the context of the script - just giving Jarvis the name of the movie and the character to play - I did not get good results.


To conclude, I think that your approach has merit, but I recommend using the Notes: command instead of search if you're creating a special note(s) for the sake of the conversation, or already know in advance which of your notes are highly relevant. It remains to be seen how many examples (or how long of a conversation) the model needs in order to calibrate itself to the character.
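For reference, the command lines can be thought of as being stripped out of the chat text before it is sent to the model, along the lines of this sketch (the exact syntax Jarvis accepts may differ):

```python
import re

# Matches lines like "Search: DeLorean notebook:Automobiles" or
# "Notes: <id> <id>..." (case-insensitive), in a hypothetical parser.
CMD_RE = re.compile(r"(?i)^(search|notes):\s*(.+)$")

def parse_chat_commands(note_text):
    """Split a chat note into (search_query, note_ids, remaining prompt)."""
    search_query, note_ids, prompt_lines = None, [], []
    for line in note_text.splitlines():
        m = CMD_RE.match(line.strip())
        if m and m.group(1).lower() == "search":
            search_query = m.group(2)
        elif m:  # a "Notes:" line, possibly with several IDs
            note_ids.extend(m.group(2).split())
        else:
            prompt_lines.append(line)
    return search_query, note_ids, "\n".join(prompt_lines)
```

Only the remaining prompt text goes to the model; the extracted search query and note IDs just steer which note excerpts are attached as context.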
