After trying to understand these concepts more than anyone should, I believe the majority of arguments come from trying to put OO on a binary scale. We are talking about paradigms, not strict definitions. Any strict definition will fail to capture a paradigm properly.
When I talk about paradigms, I mean the way you attack, analyse, and modify the problem and solution. It's a way of thinking. And a programming language captures that paradigm to the degree that you have little to "translate" between what you are thinking and the code.
OO comes from the notion that you have only objects and you can manipulate them, where each object has its own behavior and state, and you can make objects communicate with each other.
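As a minimal sketch of that view (the object names here are hypothetical, chosen for illustration), two JavaScript objects can each hold their own state and behavior, and interact only by sending messages:

```javascript
// An object that hides its behavior behind a single "message" entry point.
const printer = {
  receive(message) { return `printed: ${message}`; },
};

const doc = {
  text: "hello",
  // The doc doesn't inspect the printer's internals;
  // it only sends it a message and uses the reply.
  print(target) { return target.receive(this.text); },
};

console.log(doc.print(printer)); // "printed: hello"
```

Neither object knows anything about the other's internal state; the exchange of messages is the only coupling, which is the essential picture behind the Smalltalk quote below.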
The main idea was a recursion on a computer. [http://www.vpri.org/pdf/hc_smalltalk_history.pdf]
Smalltalk’s design–and existence–is due to the insight that everything we can describe can be represented by the recursive composition of a single kind of behavioral building block that hides its combination of state and process inside itself and can be dealt with only through the exchange of messages. Philosophically, Smalltalk’s objects have much in common with the monads of Leibniz and the notions of 20th century physics and biology. Its way of making objects is quite Platonic in that some of them act as idealisations of concepts–Ideas–from which manifestations can be created. That the Ideas are themselves manifestations (of the Idea-Idea) and that the Idea-Idea is a-kind-of Manifestation-Idea–which is a-kind-of itself, so that the system is completely self-describing– would have been appreciated by Plato as an extremely practical joke [Plato].
In computer terms, Smalltalk is a recursion on the notion of computer itself. Instead of dividing “computer stuff” into things each less strong than the whole–like data structures, procedures, and functions which are the usual paraphernalia of programming languages–each Smalltalk object is a recursion on the entire possibilities of the computer. Thus its semantics are a bit like having thousands and thousands of computers all hooked together by a very fast network. Questions of concrete representation can thus be postponed almost indefinitely because we are mainly concerned that the computers behave appropriately, and are interested in particular strategies only if the results are off or come back too slowly.
Now the OO-ness of a language depends on how well you can take that way of thinking and translate it into code, and how well the code matches that way of thinking.
In that sense, in Go, Java, and C++ you first have to translate objects into classes/types before you can write them down. Similarly, you cannot arbitrarily modify objects; you have to modify classes. This is the translation layer, which makes these languages less OO -- however, the way you think might still be OO. In Self and JavaScript you can create objects directly and manipulate them directly, which means less translation. In that sense, even the Unreal Engine Editor is better at capturing the OO-ness of things: you can directly manipulate objects, make them interact, and so on.
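To make the "less translation" point concrete, here is a small JavaScript sketch (names are illustrative): the object is created and modified directly, with no class or type declaration standing between the thought and the code.

```javascript
// No class is declared; the object itself is created directly.
const counter = {
  count: 0,
  increment() { this.count += 1; },
};

counter.increment();
counter.increment();
console.log(counter.count); // 2

// Behavior can be changed on the living object itself,
// without touching any class or type definition.
counter.increment = function () { this.count += 10; };
counter.increment();
console.log(counter.count); // 12
```

In Go or Java, the equivalent change would require editing a type or class definition and recompiling; here the object is manipulated as the thing you were thinking about.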
You can try to pin down features that a language should have to support the OO way, however none of them would perfectly capture the paradigm. Just as the blind men in "Blind men and an elephant" cannot aptly describe the whole animal, there's always something missing. Each such feature will capture some part of OO and some part of something else.
Let's take the right panel in the UE editor.

It represents an Object very well: you can modify it, change its behavior, make it communicate with some other Object... however, there are no visible methods, inheritance, or polymorphism... yet it still manages to capture the essential parts of OO better than some languages do. When you modify it, you think very concretely about objects; hell, you can even see them being changed on the screen. Then again, I wouldn't want to write a web server using it.
I find little use in measuring how much OO-ness Go captures. I care about finding better ways of thinking about problems and, more than that, about solving real problems.
+ Egon