Once you have identified your search terms, the next step is to optimise your content for Google Rich Answers. Whether you are providing a definition for a term or answering a question, you need to make this clear to the search engines and users.
Schema is a type of code that makes it easier for search engines to interpret the content and information on your web pages. By applying schema to your pages, you help Google and other search engines serve relevant information to their users more effectively and efficiently.
Google also uses schema to extract data from your content for its Answer Box and for Rich Snippets. Sites with clear schema markup may be looked on favourably when Google is selecting information for its results.
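As a minimal sketch of what such markup can look like, here is a hypothetical FAQPage snippet embedded in a page's HTML. The question, answer text, and values below are placeholders for illustration, not a prescription:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is schema markup?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Schema markup is structured-data code that helps search engines interpret the content of a web page."
    }
  }]
}
</script>
```

Google's Rich Results Test can be used to check whether markup like this parses correctly before publishing.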
To learn more about schema, visit Schema.org and Schema-Creator.org.
Whilst it is important to optimise for Google Rich Answers, do not forget the essentials when it comes to SEO. Make sure you are still optimising your pages and blogs to their full potential.
One thing we have noticed at SpotDev is that websites appearing in Google Rich Answers tend to be higher-ranking websites anyway, so bear in mind that simply optimising a new blog for Google Rich Answers probably won't get you far by itself.
Google Quick Answers are highly visible text-snippet answers and links placed at the top of the Google SERPs, and we have observed them to provide a huge boost in organic traffic. Quick Answers appear up to 40% of the time for some types of queries.
The pages Google selects for Quick Answers on our site are high-authority pages with quality, well-structured content that is theme-relevant, optimized for a great user experience, and answers a specific question closely matching the query behind the Google answer box. Below is a sample page that is featured in Google Answers.
From a technical SEO standpoint, we need to ensure the page is part of the site structure, is referenced from other pages with theme-relevant links, and is included in the XML sitemaps for better crawling (use an hreflang XML sitemap if the pages are present in multiple geos).
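For the multi-geo case, an hreflang sitemap entry looks roughly like this (the example.com URLs and language codes are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/en/page/</loc>
    <!-- each URL lists every language/region alternate, including itself -->
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/page/"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://example.com/de/page/"/>
  </url>
</urlset>
```

Every alternate URL should carry a reciprocal entry pointing back, or Google may ignore the annotations.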
Usually a page that ranks first in organic results will also be featured in Quick Answers. However, we also see instances where our pages appear in the Google Answers box even when they are not top-ranking pages and sit in positions two to four.
The BrightEdge platform now enables us to track this and score a Google Answer box or two. The Quick Answers ranking report is now part of the Keyword report, where we can measure aggregate rankings for a particular keyword group. The same report also provides details for each keyword in the group.
I recently asked a question on Stack Overflow to find out why isset() was faster than strlen() in PHP. This raised questions about the importance of readable code and whether performance improvements measured in microseconds are even worth considering.
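The PHP snippets from the question aren't reproduced here, but the more useful habit is to measure such micro-differences rather than argue about them. A rough sketch in Python, with two placeholder expressions standing in for the strlen()/isset() pair:

```python
import timeit

s = "hello world"

# Two equivalent ways to ask "is this string at least 5 characters long?"
# (placeholders standing in for the PHP strlen()-vs-isset() comparison)
t_len = timeit.timeit(lambda: len(s) >= 5, number=1_000_000)
t_slice = timeit.timeit(lambda: s[4:5] != "", number=1_000_000)

print(f"len():  {t_len:.3f}s per 1M calls")
print(f"slice:  {t_slice:.3f}s per 1M calls")
```

On any given interpreter the gap, if there is one, will be nanoseconds per call; the exact numbers matter less than having a harness to obtain them.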
My father is a retired programmer, and I showed him the responses. He was absolutely certain that a coder who does not consider performance, even at the micro level, is not a good programmer.
I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps this kind of consideration is best left to the people who implement the language itself (PHP, in the case above)?
The environmental factors could be important too - by some estimates the Internet accounts for as much as 10% of the world's electricity consumption. I wonder how wasteful a few microseconds of code is when replicated trillions of times across millions of websites?
Sometimes we do need to worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. Still, considering micro-optimisation from time to time doesn't hurt: a basic understanding helps us avoid obviously bad choices when coding, such as the example @zidarsk8 gave. That could be an inexpensive function, so changing the code afterwards would be micro-optimisation; but with a basic understanding you would not have to, because you would write it correctly in the first place.
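The zidarsk8 example itself isn't reproduced above. Purely as a hypothetical illustration, in Python rather than PHP, of the kind of obvious bad choice meant - recomputing an invariant inside a loop - compare these two (hypothetical) functions:

```python
def tokens_slow(text):
    # Obviously bad: text.split() re-tokenises the entire string on
    # every pass through the loop condition and again in the body.
    out = []
    i = 0
    while i < len(text.split()):
        out.append(text.split()[i].upper())
        i += 1
    return out

def tokens_right(text):
    # Written correctly in the first place: split once, then iterate.
    return [word.upper() for word in text.split()]
```

No after-the-fact micro-optimisation is needed for the second version; the "optimisation" is just not writing the first one.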
I both agree and disagree with your father. Performance should be thought about early, but micro-optimization should only be thought about early if you actually know that a high percent of time will be spent in small CPU-bound sections of code.
This knowledge comes from experience doing performance tuning, as in this example, in which a seemingly straightforward program, with no obvious inefficiencies, is taken through a series of diagnosis and speedup steps, until it is 43 times faster than at the beginning.
What it shows is that you cannot really guess or intuit where the problems will be. If you perform diagnosis, which in my case is random-pausing, lines of code responsible for a significant fraction of time are preferentially exposed. If you look at those, you may find substitute code, and thereby reduce overall time by roughly that fraction.
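Random-pausing is normally done by hand in a debugger. Purely to illustrate the idea, here is a crude Python sampler (a sketch, not a real profiler; all names are mine) that periodically records which source line the main thread is sitting on - lines responsible for a large fraction of time show up preferentially in the counts:

```python
import collections
import sys
import threading
import time
import traceback

def sample_hot_lines(workload, interval=0.005):
    """Run workload while a background thread repeatedly 'pauses' it,
    i.e. snapshots the main thread's stack and tallies the top line."""
    main_id = threading.main_thread().ident
    counts = collections.Counter()
    done = threading.Event()

    def sampler():
        while not done.is_set():
            frame = sys._current_frames().get(main_id)
            if frame is not None:
                top = traceback.extract_stack(frame)[-1]
                counts[(top.filename, top.lineno)] += 1
            time.sleep(interval)

    t = threading.Thread(target=sampler, daemon=True)
    t.start()
    workload()          # the code being diagnosed
    done.set()
    t.join()
    return counts

def busy():
    # Stand-in workload: a CPU-bound loop the sampler should catch.
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

hot = sample_hot_lines(busy)
```

Inspecting `hot.most_common()` points at the lines where the time went, which is exactly the information the diagnosis step needs.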
Other things you didn't fix still take as much time as they did before, but since the overall time has been reduced, those things now take a larger fraction, so if you do it all again, that fraction can also be eliminated. If you keep doing this over multiple iterations, that's how you can get massive speedups, without ever necessarily having done any micro-optimization.
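The arithmetic behind those compounding gains is simple: each pass removes some fraction of the *remaining* run time, so the overall speedup is the product of the per-pass factors. A small sketch (the example fractions are made up):

```python
from functools import reduce

def compound_speedup(fractions):
    """Overall speedup from successive tuning passes, where each pass
    removes a fraction of the run time remaining at that point:
    speedup = 1 / product(1 - f_i)."""
    remaining = reduce(lambda r, f: r * (1.0 - f), fractions, 1.0)
    return 1.0 / remaining

# Three passes removing 50%, then 40%, then 25% of what is left:
# 1 / (0.5 * 0.6 * 0.75) = 1 / 0.225, roughly a 4.4x speedup overall.
print(compound_speedup([0.5, 0.4, 0.25]))
```

A handful of such passes, none of them dramatic on its own, is how an overall factor like the 43x mentioned above can accumulate.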
After that kind of experience, when you approach new programming problems, you come to recognize the design approaches that initially lead to such inefficiencies. In my experience, it comes from over-design of data structures, non-normalized data, massive reliance on notifications, that sort of thing.
The requirements you are developing against should include some specification of performance, if performance matters at all to the client or user. As you develop the software, test the system's performance against those requirements. If you aren't meeting them, profile your code base and optimize as needed until you are.

Once you are within the minimum required performance, there is no need to squeeze more out of the system, especially if doing so would further compromise the readability and maintainability of the code (in my experience, highly optimized code is usually less readable and maintainable, though not always). If you can gain additional performance without degrading the system's other quality attributes, you can consider it.
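One lightweight way to wire such a requirement into development is a timing check in the test suite. A sketch - the helper name and the budget value are placeholders, not a recommendation:

```python
import time

def within_budget(func, budget_seconds, *args, **kwargs):
    """Run func and check it against a (hypothetical) performance
    requirement. Returns (result, elapsed_seconds, met_budget)."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= budget_seconds

# e.g. require that summing a thousand numbers stays under 100 ms:
result, elapsed, ok = within_budget(sum, 0.1, range(1000))
```

When `ok` goes false in CI, that - and not a hunch - is the signal to reach for the profiler.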
There are instances, however, where performance is of the utmost importance. I'm mainly thinking in real-time systems and embedded systems. In real-time systems, changing the order of operations can have a huge effect on meeting the hard-deadlines that are required for proper operation, perhaps even influencing the results of computations. In embedded systems, you typically have limited memory, CPU power, and power (as in battery power), so you need to reduce the size of your binaries and reduce the number of computations to maximize system life.
When programming in assembly, your father's assertion was correct. But that is much closer to the metal than most people program these days. Today, trying to optimise by hand (without profiling) can make your code slower than the common approach, because the common approach is what modern JIT compilers are most likely to optimise.
I don't know PHP, so it really isn't obvious what isset() is meant to do. I can infer it from context, but this means it will be similarly obscure to novice PHP programmers and might even cause more experienced programmers to do a double-take. This is the sort of idiomatic use of a language that can make maintenance a nightmare.
Not only that, but there is no guarantee that isset() will always be more efficient, just because it is more efficient now. An optimisation now might not be an optimisation next year, or in 10 years. CPUs, systems, compilers improve and change over time. If the performance of PHP's strlen() is a problem, PHP might get modified in the future, then all of the isset() optimisations may need to be removed to optimise the code once more.
As an example of such an anti-optimisation, I once had to sanitise a huge codebase filled with code of the form if x==0 z=0 else z=x*y because someone had made the assumption that a floating point multiply was always going to be more expensive than a branch. On the original target architecture this optimisation made the code run an order of magnitude faster, but times change.
When we tried this code on a more modern, highly pipelined CPU, the performance was horrifically bad - every one of these statements caused a pipeline flush. Simplifying all of these lines to just z=x*y made the program run an order of magnitude faster on the new architecture, restoring the performance lost to the anti-optimisation.
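A Python sketch of the pattern (the original codebase was not Python; the function names are mine). It is worth noting that, beyond being slower on pipelined hardware, the branchy version is not even semantically identical under IEEE 754:

```python
import math

def z_branchy(x, y):
    # The 'optimised' original: skip the multiply when x is zero.
    return 0.0 if x == 0.0 else x * y

def z_simple(x, y):
    # The simplified form that ran an order of magnitude faster on the
    # pipelined CPU: just do the multiply.
    return x * y

# The two agree for ordinary finite values...
assert z_branchy(0.0, 3.5) == z_simple(0.0, 3.5) == 0.0
# ...but not in IEEE 754 corner cases: 0.0 * inf is NaN, not 0.0,
# so the 'optimisation' also silently changed behaviour there.
assert math.isnan(z_simple(0.0, math.inf))
assert z_branchy(0.0, math.inf) == 0.0
```

So the clever version was not only an anti-optimisation on later hardware; it had quietly altered the floating-point semantics as well.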
Back then we worried about doing the most work per clock cycle; now we are more likely to be worrying about pipeline flushes, branch mispredictions and cache misses at the CPU level, while locks and interprocess communication become much more significant as we move to multi-process and multi-processor architectures. Ulrich Drepper's paper What Every Programmer Should Know About Memory can really help with understanding many of these issues.
Every idiot can do micro-optimization and any decent compiler/runtime will do this for you.
Optimizing good code is trivial. Writing optimized code and trying to make it good after that is tedious at best and unsustainable at worst.
If you seriously care about performance, do not use PHP for performance-critical code. Find the bottlenecks and rewrite them as C extensions. Micro-optimizing your PHP code to the point of obfuscation won't get you anywhere near as much speed.
Personally, I liked doing micro-optimization a lot when I started programming. Because it is obvious. Because it rewards you quickly. Because you don't have to take important decisions. It's the perfect means of procrastination. It allows you to run away from the really important parts of software development: The design of scalable, flexible, extensible, robust systems.