Jarvis Windows

Denisha Cerniglia

Aug 5, 2024, 9:14:37 AM
to todilama
Generations of expertise ensure that all windows and doors made by H Jarvis are built to a consistently high standard. The perfect combination of skilled windowmakers and the latest fabrication machinery is in place to produce windows and doors with excellent build quality for maximum longevity.

Because we specialise in supplying housebuilders we understand the issues around site management and logistics, and have an unrivalled reputation not just for the quality of our products, but the quality of our delivery and after-care service too.


Underpinning our decades of success has been our solid commitment to the highest level of customer service. From the moment you make contact with our office, we aim to deal with the enquiry promptly and efficiently. Service is the focus at every stage in the process from estimating, design and fabrication right through to delivery, fitting and post-installation checking.


With all the talk about chatbots such as ChatGPT, it's easy to forget that text-based chat is just one of many AI functions. The ideal generative AI would be able to work across different models as needed, interpreting and generating images, audio and video.


Enter Jarvis, a new project from Microsoft that promises one bot to rule them all. Jarvis uses ChatGPT as the controller for a system that can employ a variety of other models as needed to respond to your prompt. In a paper published by Cornell University, Microsoft researchers (Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu and Yueting Zhuang) explain how this framework works: a user makes a request to the bot; it plans the task, chooses which models it needs, has those models perform the task, and then generates and issues a response.
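The four-stage loop described above (plan, select models, execute, respond) can be sketched roughly as follows. This is an illustrative toy, not the real Jarvis code: the function names, the keyword-based planner, and the model registry are all invented stand-ins for what the controller LLM and Hugging Face models actually do.

```python
# Hypothetical sketch of the HuggingGPT-style controller loop.
# In the real system, Stage 1 and Stage 4 are handled by ChatGPT and
# Stages 2-3 dispatch to actual Hugging Face models.

def plan_tasks(prompt):
    # Stage 1: break the request into subtasks.
    # Faked here with a keyword check instead of an LLM.
    tasks = []
    if "pose" in prompt:
        tasks.append("pose-detection")
    if "image" in prompt:
        tasks.append("image-generation")
    return tasks

# Stage 2: a registry mapping task types to models (toy lambdas here).
MODEL_REGISTRY = {
    "pose-detection": lambda inp: f"pose extracted from {inp}",
    "image-generation": lambda inp: f"image drawn using {inp}",
}

def run(prompt, attachment):
    results = []
    data = attachment
    for task in plan_tasks(prompt):
        model = MODEL_REGISTRY[task]   # Stage 2: select a model
        data = model(data)             # Stage 3: execute, chaining outputs
        results.append((task, data))
    # Stage 4: summarise the per-model results into one reply.
    return "; ".join(f"{t}: {r}" for t, r in results)

print(run("draw an image of a girl in the same pose", "boy.jpg"))
```

Note how the output of one model (the extracted pose) becomes the input to the next (the image generator), which is exactly the girl-reading-a-book example from the paper.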


The chart below, provided in the research paper, shows how this process works in the real world. A user asks the bot to create an image where a girl is reading a book and she is positioned the same way that a boy is in a sample image. The bot plans the task, uses a model to interpret the boy's pose in the original image and then deploys another model to draw the output.


Microsoft has a Github page where you can download and try out Jarvis on a Linux-powered PC. The company recommends Ubuntu (specifically the outdated version 16 LTS), but I was able to get its main feature -- a terminal-based chatbot -- working on Ubuntu 22.04 LTS and on Windows Subsystem for Linux.


However, unless you really like the idea of messing with configuration files, the best way to check out Jarvis is by using HuggingGPT, a web-based chatbot that the Microsoft researchers have set up at Hugging Face, an online AI community which hosts thousands of open-source models.


If you follow the steps below, you'll have a working chatbot you can show images or other media to and ask it to output images as well. I should note that, like other bots I've tried, the results were very mixed.


1. Obtain an OpenAI API Key if you don't already have one. You can get it at OpenAI's website by signing in and clicking "Create new secret key." Signing up is free and you get a small amount of free credit, but you will have to pay for more if you use it up. Store the key somewhere you can easily get to it, such as a text file; the key is only displayed once, so you won't be able to view it again later.


4. Edit the configuration files and enter your OpenAI API key and Hugging Face token where appropriate. The files are config.azure.yaml, config.default.yaml, config.gradio.yaml and config.lite.yaml. In this how-to, we'll only be using the gradio file, but it makes sense to edit them all. You can edit them using nano (ex: nano config.gradio.yaml). If you don't have these API keys, you can get them for free from OpenAI and Hugging Face.
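If you'd rather not open all four files by hand, a small script can do the substitution. The placeholder strings below are assumptions -- open one of the config files first and check what the key fields actually look like in your checkout before adapting this. The snippet demos on a throwaway file so it is runnable as-is:

```python
from pathlib import Path

# Hypothetical placeholder markers; the real config files may use
# different text, so verify before running against them.
def fill_keys(path, openai_key, hf_token):
    text = Path(path).read_text()
    text = text.replace("OPENAI_API_KEY_PLACEHOLDER", openai_key)
    text = text.replace("HUGGINGFACE_TOKEN_PLACEHOLDER", hf_token)
    Path(path).write_text(text)

# Demo on a throwaway file instead of the real configs:
demo = Path("config.demo.yaml")
demo.write_text(
    "openai:\n  api_key: OPENAI_API_KEY_PLACEHOLDER\n"
    "huggingface:\n  token: HUGGINGFACE_TOKEN_PLACEHOLDER\n"
)
fill_keys(demo, "sk-example", "hf_example")
print(demo.read_text())
```

To use it for real, you would loop `fill_keys` over config.azure.yaml, config.default.yaml, config.gradio.yaml and config.lite.yaml with your actual keys.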


5. Install Miniconda if you don't have it installed already. You'll need to download the latest version from the Miniconda site. After downloading the installer, you install it by going to the Downloads folder and entering bash followed by the install script name.


You'll be prompted to agree to a license agreement and confirm the install location. After you have installed Miniconda, close and reopen all terminal windows so that the conda command is on your PATH. If it is still not found, try rebooting.


Using the gradio server is just one possible way to interact with Jarvis under Linux. The Jarvis Github page has more choices. These include using the models server or starting a command-line based chat.


I couldn't get most of these methods working (the command-line chat worked OK but wasn't as nice an interface as the web one). You may also be able to install more models and get text-to-video generation going (which I could not).


The bot can answer standard text questions, along with queries asking about images, audio and video. It can also potentially generate images, sound or video for you. I say potentially because, if you use the web version, it's limited by whatever free models it can access from Hugging Face. On the Linux version, you may be able to add some additional models.


There are some sample queries listed below the prompt box that you can click and try. These include feeding it three example images and having it count how many zebras are in them, asking it to tell a joke and show a cat picture or asking it to generate one image that looks like another one.


Since it's web-based, the way to feed it images is to send it the URLs of pictures that are online. However, if you are able to use the Linux version, you can store images locally in the JARVIS/server/public folder and refer to them by relative URLs (ex: /myimage.jpg would be in the public folder and /examples/myimage.jpg would be in the examples subfolder of public).
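The mapping from relative URL to disk location described above is simple enough to express directly. This is just a sketch of the convention the article states (files under JARVIS/server/public, referenced by path relative to that folder); the helper function name is my own:

```python
from pathlib import Path

# Root of the static files served by the Jarvis gradio server.
PUBLIC_ROOT = Path("JARVIS/server/public")

def resolve_media_url(url: str) -> Path:
    # "/myimage.jpg"          -> JARVIS/server/public/myimage.jpg
    # "/examples/myimage.jpg" -> JARVIS/server/public/examples/myimage.jpg
    return PUBLIC_ROOT / url.lstrip("/")

print(resolve_media_url("/myimage.jpg"))
print(resolve_media_url("/examples/myimage.jpg"))
```

So to feed the Linux version a local picture, drop it into that public folder (or a subfolder) and pass the bot the matching relative URL.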


Most original queries I tried did not turn out particularly well. Image recognition was particularly poor. When I fed it images of M.2 SSDs and asked where I could buy one, it identified the SSDs as a suitcase and then told me to find "a store."


Similarly, when I fed it a screenshot from Minecraft and asked where I could buy it, it falsely claimed that it saw a kite flying through the air. It thought an RTX 4070 was a black and white photo of a computer. And when I asked where I could buy one, it said "you can purchase one of these items from our online store or from a variety of retailers near you," but there was no actual link to any real online store.


In short, apart from the specific examples Microsoft suggests, most queries did not turn out particularly well. But as with other AI frameworks such as Auto-GPT and BabyAGI, the problem is in the models you use and, as the models improve, so will your output. If you want to try autonomous agents, check out our tutorials on how to use Auto-GPT and how to use BabyAGI.


This is my code; however, when I run it there is no error message and nothing seems to happen. I am kind of lost as to what's wrong. It was supposed to answer "I am fine" but nothing happens. I am using Windows 10 and Python 3.6.3. The code seems to be for Linux, but I don't know why, and even if so, how can I edit or write code to create a simple Jarvis for Windows using Python? Thank you.


Hello. I am a new Joplin user, and I am about to start using it with the Jarvis plugin. I created a new OpenAI API key and entered it into the Jarvis settings. I then started receiving an error pop-up that is now blocking me from accessing Joplin. I keep clicking 'Ok' and even 'Cancel', but the same error message pops up, preventing me from accessing Joplin. What is the best course of action for preventing this issue? My OpenAI API key is from a new alternative account, so I should not be getting errors like this one unless I need a paid subscription.


I fixed it. I simply created a new key, copied it, pasted it into the entry field, then deleted the former key. I went back over all of the settings and reset them to their defaults. The error went away. I can't say what the problem was, except that maybe a stray space character at the end of the key was the culprit.


To be honest @davadev, I haven't been able to run the GPT4All docker since I moved to Apple Silicon exclusively (apparently it's an issue). I was able to run models and test the API on an Intel machine, but I don't have access to it anymore.


Unfortunately, I was also unable to add native support to enable local hosting of LLM models in Jarvis (despite numerous attempts) due to technical issues, so I can't support Mini Orca (or other models) directly.


LM Studio was recently brought to my attention. I was a little hesitant at first, because it looks like a free but closed-source commercial product. But it's definitely user-friendly. They have a GPT4All-like GUI, but in addition to a chat window, they also have a server window that lets you host your local LLM. For me it worked very well. I updated the Jarvis guide with setup instructions.


The only problem I have now is that the input length exceeds the context length (an error in LM Studio). I tried changing Max Tokens (2048) and Memory Tokens (512) in the Jarvis settings, but I still get about 7,000 tokens on the server side.
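A rough back-of-the-envelope calculation shows why the overflow above can happen even with those settings: the note content Jarvis appends to the prompt is counted against the model's context window too, not just the Max/Memory Tokens values. The ~4 characters-per-token rule of thumb and the numbers below are illustrative assumptions, not values taken from Jarvis or LM Studio:

```python
# Crude token estimator: ~4 characters per token for English text.
# This is a heuristic, not a real tokenizer.
def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

context_window = 2048     # context length configured in LM Studio
memory_tokens = 512       # Jarvis "Memory Tokens" setting
notes = "x" * 26000       # stand-in for ~6,500 tokens of attached notes

prompt_tokens = approx_tokens(notes) + memory_tokens
print(prompt_tokens, "estimated prompt tokens vs", context_window, "context")
print("overflow" if prompt_tokens > context_window else "fits")
```

In other words, if the notes Jarvis pulls in are large, the prompt can blow past a 2048-token context regardless of the Max Tokens setting, which would explain seeing ~7,000 tokens server-side.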


PS1: It's amazing how far Jarvis has come this year!

PS2: I have a few ideas on how to improve the performance of Jarvis even when a weaker model is used, but first I have to get the offline model running. I was inspired by this video of GPT-4 playing Minecraft. I think one could use a carefully crafted system prompt and context prompt in Jarvis to improve model performance, just as they did in the video when they taught GPT-4 to play Minecraft purely by careful prompting, without training the underlying model on Minecraft data. I think this could be the way to get good enough results even with smaller offline models.


I see on the server side that Jarvis attaches this instruction to the user prompt: "Respond to the user's prompt above. The following are the user's own notes. You you may refer to the content of any of the notes, and extend it, but only when it is relevant to the prompt. Always cite the [note number] of each note that you use." I think it is a great prompt; however, I would like to tweak it a little. Could you make it possible to adjust this prompt? I haven't seen it anywhere in the Jarvis settings.
