Scripts Aimbot


Doris Joo

Aug 5, 2024, 3:50:04 AM8/5/24
to reboolicas
Alright, in response to some of the aimbot posts made in Support, etc., I decided to take some time and write a pixel-searching proof-of-concept aimbot. I figure that when people ask for help writing aimbots, we can now refer them to this thread.

I was actually surprised by the speed of PixelSearch. I was expecting it to be significantly slower over such a large area. I did a few timed tests before I went forth and wrote this, and the results warranted spending a little time on it (40-60 milliseconds for a 1024x768 area!).


However, you can tell that when the object is moving at a decent rate, pixel searching the entire screen is not practical. In the Camper Strike example, with the setting "On + Autoshoot" you can see how it sometimes trails behind the faster-moving targets. But with "Snap-To Autoshoot", which uses a small search area, the targets are easily brought down.


-Snap-to (Scans a 50-pixel-square area surrounding your mouse; if you get near a target, it snaps to the target and then scans only a 10-pixel-square area while locked on. Much faster.)


I think the best aimbot for an FPS written in AutoIt (at least, the best/easiest) would pixel-search a box around your mouse position and "snap" you to the heads of enemies. With the speed of PixelSearch in this proof of concept, I think it would be deadly.


Yes... well, let's say I was writing an aimbot for an FPS. Pixel-searching the entire screen might be a little too slow, but if I pixel-searched a reasonable square area around your mouse, you would simply have to move the mouse somewhat close to your enemy and it would snap to their head.
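The snap-to logic itself is language-agnostic. Here is a rough Python sketch of just the search step (the original post uses AutoIt's PixelSearch; the `screen` grid, the `target` colour value, and the nearest-match tie-break are illustrative assumptions, not the posted script):

```python
def snap_to(screen, cursor, target, radius):
    """Scan a box of +/- radius pixels around cursor for the target colour.

    screen: 2D grid of colour values (screen[y][x]); cursor: (x, y).
    Returns the nearest matching pixel, or the cursor if none is found.
    """
    cx, cy = cursor
    best = None
    for y in range(max(0, cy - radius), min(len(screen), cy + radius + 1)):
        for x in range(max(0, cx - radius), min(len(screen[0]), cx + radius + 1)):
            if screen[y][x] == target:
                d = (x - cx) ** 2 + (y - cy) ** 2  # squared distance
                if best is None or d < best[0]:
                    best = (d, (x, y))
    return best[1] if best else cursor
```

Once locked on, you would call this again with the smaller radius (10 instead of 50), which is why the locked-on scan is so much faster.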


It has nothing to do with what process your AutoIt script "looks like". They programmatically block you from using commands like Send(), MouseClick(), etc., which is why the fake device driver gets around it: it emulates an actual mouse, so the program cannot tell the difference.


I have been using my free time to study Python scripting so that I can automate tasks at work and improve efficiency. Several of my scripts were written to make things easier in Pro/E, so I figured I would go ahead and share them here:


This script was written because several of my colleagues refuse to use Pro/Piping for hydraulic tubes/hoses. Because of this, we had no way of adding CNC bending data to the drawing without remodeling their parts in Pro/Piping. This script attempts to solve that problem by reading in a Pro/E point file and spitting out a CSV file containing the bend data (feed, bend, twist). It is still a work in progress, but I have posted the source code on GitHub:
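For anyone curious how bend data falls out of a point file, here is a hedged sketch of the geometry only: it assumes the point file has already been parsed into XYZ centerline points, and computes feed as the length between points, bend as the angle between successive direction vectors, and twist as the rotation between successive bend planes. This is my reconstruction of the idea, not the posted source:

```python
import math

def _sub(a, b): return [x - y for x, y in zip(a, b)]
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a): return math.sqrt(_dot(a, a))
def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def bend_table(points):
    """Return (feed, bend_deg, twist_deg) rows for a centerline polyline."""
    rows, prev_normal = [], None
    for i in range(1, len(points)):
        v = _sub(points[i], points[i - 1])
        feed = _norm(v)                      # straight length to this point
        bend = twist = 0.0
        if i < len(points) - 1:              # interior point: there is a bend
            w = _sub(points[i + 1], points[i])
            c = max(-1.0, min(1.0, _dot(v, w) / (_norm(v) * _norm(w))))
            bend = math.degrees(math.acos(c))
            normal = _cross(v, w)            # bend-plane normal
            if prev_normal is not None and _norm(normal) > 1e-9:
                c2 = max(-1.0, min(1.0, _dot(prev_normal, normal) /
                                   (_norm(prev_normal) * _norm(normal))))
                twist = math.degrees(math.acos(c2))
            if _norm(normal) > 1e-9:
                prev_normal = normal
        rows.append((feed, bend, twist))
    return rows
```

Dumping the rows with `csv.writer` then gives the CSV the CNC bender wants.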


This script was written at the request of our purchasing department. They were having trouble understanding what changed between revisions on large/busy drawings. The solution was to write a script that compares the PDFs of the two drawings and outputs a JPEG with deleted items highlighted in red and new items highlighted in blue.
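The comparison itself is simple once the pages are rasterized. A minimal sketch, assuming both PDF pages have already been rendered to same-size black-and-white bitmaps (that rendering step, e.g. via Ghostscript, is omitted here):

```python
def diff_pages(old, new):
    """Colour-code the difference between two rasterized drawing pages.

    old/new: 2D lists of booleans (True = ink). Returns a 2D list of RGB
    tuples: red = deleted (old only), blue = added (new only),
    black = unchanged ink, white = unchanged background.
    """
    RED, BLUE = (255, 0, 0), (0, 0, 255)
    BLACK, WHITE = (0, 0, 0), (255, 255, 255)
    out = []
    for row_old, row_new in zip(old, new):
        row = []
        for a, b in zip(row_old, row_new):
            if a and not b:
                row.append(RED)      # item was deleted
            elif b and not a:
                row.append(BLUE)     # item was added
            elif a and b:
                row.append(BLACK)
            else:
                row.append(WHITE)
        out.append(row)
    return out
```

The resulting pixel grid can then be written out as a JPEG with any imaging library.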


This is sort of a poor man's data management system (I use it when I'm away from the office and don't have access to the data management system). The script will scan your working directory and grab the latest revision of every part/assembly/drawing and place them in a subdirectory with the current time stamp. This allows you to grab snapshots during different phases of a design and also allows you to roll back to a specific point very easily.


I've been doing similar work with Python. I export the model tree of a part or assembly, analyse it, and write mapkeys that perform repetitive tasks. Lately we need to access the API in Creo so that we can query the models more directly and, I hope, even manipulate the CAD data. From my reading, there is a Visual Basic API which can run asynchronously. I'm hoping I can use this from Python.


What I did was copy the contents of the trail file, figure out which part of it corresponded to the label, and then use Python to generate a new trail file with the new data.


* Lua scripts require a Lua executor

* These scripts have only been verified to work on the WeAreDevs API exploits such as JJSploit

* If a script has broken features, they may have been patched

* If you found a broken script, please let us know in the forum so it can be taken down

* You can submit your own script on the forum. Be detailed and make it clear that it's a submission




A few weeks ago I was at Heathrow airport getting a bit of work done before a flight, and I noticed something odd about the performance of GitHub: It was quicker to open links in a new window than simply click them. Here's a video I took at the time:


When you load a page, the browser takes a network stream and pipes it to the HTML parser, and the HTML parser is piped to the document. This means the page can render progressively as it's downloading. The page may be 100k, but it can render useful content after only 20k is received.


This is a great, ancient browser feature, but as developers we often engineer it away. Most load-time performance advice boils down to "show them what you got" - don't hold back, don't wait until you have everything before showing the user anything.


This breaks the rule, as all of page-data.inc is downloaded before anything is done with it. The server-rendered version doesn't hoard content this way, it streams, making it faster. For GitHub's client-side render, a lot of JavaScript was written to make this slow.


Although "Hello!" is written to the iframe, it appears in the parent document! This is because the parser maintains a stack of open elements into which newly created elements are inserted. It doesn't matter that we moved the element into the parent document; it just works.


Also, this technique processes HTML much closer to the standard page-loading parser than innerHTML does. Notably, scripts will download and execute in the context of the parent document, except in Firefox, where script doesn't execute at all, but I think that's a bug (update: it turns out scripts shouldn't be executed; thanks to Simon Pieters for pointing this out. Edge, Safari & Chrome all execute them, though).


Now we just have to stream HTML content from the server and call iframe.contentDocument.write() as each part arrives. Streaming is really efficient with fetch(), but for the sake of Safari support we'll hack it with XHR.
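A sketch of the XHR plumbing, assuming a same-origin endpoint (shown here as '/page-content', a made-up URL). The only subtle part is slicing responseText so each chunk is written exactly once; the browser-only wiring is shown as comments:

```javascript
// Returns a function that yields only the portion of responseText we
// haven't written yet, so repeated readystatechange events don't
// re-write earlier content.
function makeChunkExtractor() {
  let written = 0;
  return function next(fullText) {
    const chunk = fullText.slice(written);
    written = fullText.length;
    return chunk;
  };
}

// Browser-only wiring:
//
//   const iframe = document.createElement('iframe');
//   iframe.style.display = 'none';
//   document.body.appendChild(iframe);
//   iframe.onload = () => {
//     iframe.onload = null;
//     iframe.contentDocument.write('<streaming-el></streaming-el>');
//     // Move the element into the parent; the parser keeps feeding it.
//     document.body.appendChild(
//       iframe.contentDocument.querySelector('streaming-el'));
//     const next = makeChunkExtractor();
//     const xhr = new XMLHttpRequest();
//     xhr.open('GET', '/page-content');
//     xhr.onreadystatechange = () => {
//       iframe.contentDocument.write(next(xhr.responseText));
//       if (xhr.readyState === 4) iframe.contentDocument.close();
//     };
//     xhr.send();
//   };
//   iframe.src = '';
```

Reading responseText incrementally like this works because the default ('text') responseType exposes partial data during readyState 3.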


By streaming the content via the iframe, content appears 1.5 seconds sooner. The avatars also finish loading half a second sooner - streaming means the browser finds out about them earlier, so it can download them in parallel with the content.


The above would work for GitHub since the server delivers HTML, but if you're using a framework that wants to manage its own representation of the DOM you'll probably run into difficulties. For that case, here's a less-good alternative:


This is called "newline-delimited JSON" and there's a sort-of standard for it. Writing a parser for the above is much simpler. In 2017 we'll be able to express this as a series of composable transform streams:
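Until then, the buffering can be done by hand. A minimal sketch of such a parser: feed it arbitrary network chunks, and it returns each object as soon as its terminating newline arrives (a TransformStream version would carry the same buffer across transform() calls):

```javascript
// Minimal ND-JSON chunk parser. Network chunks can split a JSON object
// anywhere, so incomplete trailing text is buffered until its newline
// arrives; each call returns the objects completed by that chunk.
function makeNdJsonParser() {
  let buffer = '';
  return function push(chunk) {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop(); // the last piece may be incomplete
    return lines.filter((l) => l.trim()).map((l) => JSON.parse(l));
  };
}
```

Each returned object can then be turned into DOM elements immediately, without waiting for the rest of the response.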


Versus normal JSON, ND-JSON gets content on screen 1.5 seconds sooner, although it isn't quite as fast as the iframe solution. It has to wait for a complete JSON object before it can create elements, and you may lose the benefit of streaming if your JSON objects are huge.


As I mentioned above, GitHub wrote a lot of code to create this performance problem. Reimplementing navigations on the client is hard, and if you're changing large parts of the page it might not be worth it.


Feel free to throw me an email, unless you're a recruiter, or someone trying to offer me 'sponsored content' for this site, in which case write your request on a piece of paper, and fling it out the window.


I have set up a testing project that exhibits the behavior, both for some older versions of Gradle where this still worked and for some more recent versions where it no longer works: GitHub - sebkur/gradle-scripts-tests


Since these scripts are not built-in, supported product features, Unbounce is unable to support or guarantee them in any way, shape, or form. Also be aware that, while rare, it is possible for custom scripts to break or lose functionality as we update our app.


We also recommend leaving a comment on each tip that you implement, as it allows our product team to get better visibility into which codes/features are popular in the Community (which in turn gives us better visibility into which features we could build into the app).


If you create an additional section, you can simply hide it from the mobile view, copy the required content into the new section in the desktop view, and then, once that is done, hide the mobile sections from the desktop view.


Those of you who have been reading my posts here for a while know how much I emphasize good reconnaissance. Novice hackers often jump into a hack/exploit without doing proper recon and either fail or get caught. Experienced and expert hackers know that 70-80 percent of a good and successful hack depends on successful and accurate reconnaissance.


I know I have said it before, but bear with me as I say it again for the newcomers. There is NO SILVER BULLET that succeeds under all circumstances. Long before we ever begin the hack, we have spent hours, days, and maybe months doing reconnaissance. If you aren't willing to do that, you will never be successful in this field of endeavor.


Nmap is one of the few tools that every hacker should be conversant in. Although it is not perfect, it is excellent for active reconnaissance. Although I discourage the use of Windows for hacking, Nmap does have a version for Windows with a nice GUI called Zenmap. You can download it here.


The Nmap scripting engine is one of Nmap's most powerful and, at the same time, most flexible features. It allows users to write their own scripts and share them with other users for networking, reconnaissance, and similar purposes. These scripts can be used for tasks ranging from network discovery to vulnerability detection.
