Don't stop with ontologies. To get the best jobs and the most interesting work, you need a set of skills in addition to defining ontologies.
1) If you don't know it already, learn Python. It is a very easy language to learn and one of the most popular for data science, if not the most popular. One amazing thing is that there is a Python library for just about anything, and reusing libraries in Python is very simple compared to most other languages.
2) Get familiar with a triplestore database. The major vendors Stardog, AllegroGraph, and Ontotext all have "community" versions that are much better than the standard free versions of most commercial tools. (There is at least one other major vendor, Amazon Neptune, but I don't know if it has a decent community version.) The three I mentioned never time out, and they support all, or virtually all, the features of the licensed product. Usually the limitation is how large a graph you can load, but you aren't confined to trivial sizes; all three support decently large graphs in the free version. When you do commercial work you'll blow past those limits pretty quickly, but for research and learning those tools are great. They also have graph visualization tools that completely blow away the tools in Protege. The best looking one is Stardog's. I really like the way it shows individuals inside of classes; it reinforces the idea of understanding classes and instances as sets and elements. Stardog's is also the easiest to use, partly because there aren't that many things you can do with it. For power, though, I think AllegroGraph's Gruff tool blows away Stardog's. Gruff has exponentially more capabilities. It doesn't look nearly as good as the Stardog graphs, but in terms of all the possible ways to change layout and manipulate graphs, Gruff is awesome.
3) Learn how to integrate LLMs with triplestore databases. This lets you do so many cool things. The good news is that all the vendors I mentioned above already have tools for this. I've only used AllegroGraph's integration, but it is awesome. You can build systems using an architecture called Retrieval Augmented Generation (RAG). With RAG you replace the internal knowledge of the LLM (which is a black box and hence prone to hallucinations) with a curated corpus of documents. Like all architecture decisions, it's a trade-off: you give up the breadth of an LLM for a much narrower focus (the domain of your corpus). In return you all but eliminate hallucinations, and you no longer have a black box (the LLM) but explicit knowledge representation in the corpus and in the ontology and knowledge graph objects that provide additional context. Here's some work I did recently:
https://www.michaeldebellis.com/post/modeling-climate-obstruction-using-a-rag-knowledge-graph
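To make the RAG pattern concrete, here is a minimal sketch in plain Python. The corpus, the keyword-overlap retrieval, and the prompt format are all toy stand-ins I made up for illustration; a real system would retrieve from a vector index or a knowledge graph and send the prompt to an actual LLM client.

```python
# Toy RAG pipeline: retrieve relevant passages from a curated corpus,
# then build a prompt that grounds the LLM's answer in those passages.

corpus = [
    "AllegroGraph is a triplestore that supports SPARQL 1.1.",
    "RAG grounds an LLM's answers in a curated document corpus.",
    "Kafka is an event streaming platform invented at LinkedIn.",
]

def retrieve(question, corpus, k=2):
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, passages):
    """Assemble the grounded prompt that would be sent to the LLM."""
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer ONLY from the context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

question = "What does RAG do with a corpus?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)
```

The key point is structural: the model is instructed to answer only from retrieved passages, so every answer can be traced back to a document in your curated corpus.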
4) Make sure you have a deep understanding of SPARQL. Again, using a triplestore database will help here, because the SPARQL implementations in Protege are terrible: they leave out huge chunks of the spec, so you can only use them for very limited problems like generating labels. Each of the three vendors has great SPARQL tools. Stardog's is the best; it is essentially an IDE for SPARQL and supports things like completion. Also read Bob DuCharme's book Learning SPARQL.
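To give a sense of what full SPARQL looks like beyond generating labels, here is a small query using features (property paths shorthand, FILTER, ORDER BY) that the full spec provides. The ex: namespace and the class and property names are made up for illustration.

```sparql
PREFIX ex: <http://example.org/>

# Find up to ten people whose names start with "A", alphabetically.
SELECT ?person ?name
WHERE {
  ?person a ex:Person ;
          ex:name ?name .
  FILTER (STRSTARTS(?name, "A"))
}
ORDER BY ?name
LIMIT 10
```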
5) Make sure you know the basics of GitHub. When you work on a team, it is almost a certainty they will use it. IMO GitHub could be a bit easier to learn, but that's irrelevant, because it is the de facto standard for teams both academic and business. If you are like me and not a fan of command line tools, the GitHub Desktop client is great. You can edit your local repo using the usual tools (e.g., Finder on macOS, File Explorer on Windows), and when you sync with the server, the changes you made locally are reflected in the shared version on the Internet.
6) Learn Apache Airflow. It is a great tool for ingesting data into knowledge graphs (it isn't specific to knowledge graphs; it can be used with them just as with most other kinds of databases).
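An Airflow ingestion pipeline typically chains extract, transform, and load tasks. Here is that shape in plain, runnable Python; in real Airflow each function would be wrapped with the @task decorator and chained inside a @dag function, and the CSV data and example.org URIs are made up for illustration.

```python
import csv
import io

# Stand-in for a real data source (a file, an API, a database dump).
RAW = "id,name\nq1,Stardog\nq2,AllegroGraph\n"

def extract():
    """Pull rows from the source system."""
    return list(csv.DictReader(io.StringIO(RAW)))

def transform(rows):
    """Turn each row into an RDF-style (subject, predicate, object) triple."""
    return [(f"http://example.org/{r['id']}",
             "http://example.org/name",
             r["name"]) for r in rows]

def load(triples):
    """Hand the triples to the graph store (here: just collect them)."""
    store = []
    store.extend(triples)
    return store

# In Airflow, this chaining is what the DAG definition expresses.
store = load(transform(extract()))
print(len(store), "triples loaded")
```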
7) Finally, if you want to do interesting work in industry, learn Kafka, Docker, and Kubernetes. These are some of the most important open source tools for building microservices. Places like LinkedIn (they invented Kafka and made it open source), Reddit, and Facebook all use these tools. Kafka is a next-generation application integration tool. It's like message-oriented middleware, but it uses the concept of an event rather than a message. This has all sorts of benefits, but it's too complex to go into here. Also, because memory is so much cheaper now than when EAI tools were first invented, Kafka can use massive parallelism, which gives it amazing speed and fault tolerance. Docker is a container tool. Containers standardize different kinds of infrastructure. E.g., if people in the company use Kafka and AllegroGraph together, you can create a Docker image that already has them integrated, so everyone starts from the same foundation. You don't have the hassle of installing and configuring complex tools, and you are assured that each person starts with the same configuration and integration. Docker is also a common way to package microservices. Kubernetes is the orchestration tool for Docker containers. It handles things like load balancing, failover and recovery, scheduling, logging, etc.
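The Kafka-plus-AllegroGraph example could be sketched as a Docker Compose file like the one below. This is a hypothetical fragment: the image tags, ports, and credentials are illustrative, and a real deployment would add volumes, networking, and proper secrets.

```yaml
# Hypothetical docker-compose.yml giving every developer the same
# Kafka + AllegroGraph starting point.
services:
  kafka:
    image: apache/kafka:latest
    ports:
      - "9092:9092"
  agraph:
    image: franzinc/agraph:latest
    ports:
      - "10035:10035"
    environment:
      AGRAPH_SUPER_USER: admin
      AGRAPH_SUPER_PASSWORD: example   # illustrative only; never commit real credentials
```

With this checked into the repo, `docker compose up` gives everyone the identical integrated environment, which is exactly the "same foundation" benefit described above.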
8) One last thing: get familiar with Agile methods. When I was working for consulting firms, I saw the difference between Agile and other methods, and it was amazing. With Agile, the users are part of the team; they want to use the new system rather than resist it, which is the natural reaction, because people don't like change. I recommend reading Kent Beck's little book Extreme Programming Explained. (Extreme Programming was one of the original Agile methods.) Beck's book is, as any Agile book should be, short, to the point, and very easy to read.
9) If you want to do research, the Information Sciences Institute
https://www.isi.edu/ in Marina del Rey (right next to Los Angeles) is an awesome place. They are probably the research group most responsible for OWL, because they built a language called Loom which had many of the features of OWL. I don't know how much they are still into OWL, though. They probably are, but the one guy I know who is still there, Pedro Szekely, is an amazing programmer and not a fan of OWL; he prefers RDF/RDFS because he thinks OWL takes up too much space. E.g., with OWL he can't load DBpedia on his laptop... which I think is a dumb thing to want to do anyway. ISI is mostly dependent on grants from the DoD, DARPA, and other federal agencies, and our idiot president has slashed all sorts of research funding, so they may not be hiring. That's going to be an issue for any true R&D group; they are mostly funded by the federal government, which is controlled by the executive branch. Then again, the stuff that got slashed was mostly things like NSF, NASA, and EPA programs to address climate change. The DoD research (and ISI gets most of its funding from them) may still be intact. ISI also works closely with (actually, it's kind of a love-hate relationship with) the University of Southern California. One person doing ontology work at USC is Yolanda Gil. She is doing research on intelligent user interfaces and knowledge representation, which is what we focused on with Loom when I worked at ISI. Speaking of which, here is a paper you might find interesting about technologies like Loom that paved the way for the Semantic Web standards:
https://www.michaeldebellis.com/post/semanticwebhistory
Another group worth checking out (with your Biology background this one might be especially promising) is the Open Biological and Biomedical Ontology (OBO) Foundry:
https://obofoundry.org/

If you are interested in grad school, Stanford is great if you can get a fellowship. They of course created Protege, and while Protege development isn't as active as it used to be, I think they are still doing work in ontologies.
I think the University of Maryland, College Park (UMD) does significant ontology work. There is a lab there called the Institute for Advanced Computer Studies (UMIACS), and two faculty members doing ontology work are James Mayfield and Tim Finin. Not sure if they are still there, though; my knowledge of them is a bit dated, but I think they are.
The University at Buffalo (SUNY) is a great place for doing ontology work. Barry Smith is there. He's done amazing work. I personally am not a fan of BFO, but it is the upper model used by all OBO ontologies.
Hope that helps. Good luck!
Michael