The classic image that comes to mind when you hear someone say they studied philosophy is of a person living alone in an imaginary world of their own thoughts, with no bearing on or relationship to reality: armchair thinkers in an ivory tower, so to speak. While this is certainly true in some areas (ontology, I'm looking at you), it is not the case in the vast majority of philosophical fields. Philosophy is one of the most important endeavors in human history and informs nearly every aspect of our lives, even if we don't see it. When we judge which sources of information are more credible, we employ the field of philosophy known as epistemology, the study of knowledge and its valid means; when we determine what is right or wrong, we employ the field known as ethics; and much of our political thought stems from ideas developed in philosophical work on free will, metaphysics, ontology, and other areas.
Two of the most important skills gained from studying philosophy are critical thinking and innovative thinking. The former is something most people expect from philosophy; the latter is not. Studying philosophy requires you to reason through your own assumptions and the ideas of others, finding the problems in both and then finding solutions. This is the key point of philosophy: identifying problems and devising solutions for them. Some of these problems are theoretical; others are practical and real. This is also what innovation requires: finding a problem that exists in the world and then finding or creating the solution.
Businesses exist because there are problems that people and society want to solve or minimize. Businesses begin with ideas to address those problems; sometimes they solve them efficiently, other times not. Here lies the value of philosophical education in a business setting. Philosophy requires that problems be broken down into specific, definable statements, which can then be resolved one by one with clear and definite solutions. Philosophical practice involves taking a problem, thinking about it abstractly, and developing a practical, articulable solution. It demands a deep dive into the premises underlying any process, methodology, solution, or idea.
Philosophers aren't content with short-term fixes or a "this is the way it has always been done" mentality. They help disrupt established ideas and processes; indeed, that is how philosophy has grown over its history and how we see technology transforming our world today. Another great benefit of philosophical training in a business environment is the ability to communicate concisely and convincingly. Philosophers learn to write persuasively, ensuring that readers understand the points made and find the arguments or ideas compelling.
My own philosophical training has helped me at key moments throughout my career as a lawyer and now as a business and operational head. It has helped me separate the wheat from the chaff: I can identify clearly what the problems are, then examine the various reasons for each one, including the underlying process or idea the problem is connected to. This allows my team and me to creatively design solutions without necessarily being constrained by that underlying process.
Businesses and corporations around the world would benefit greatly from treating their employees' and leaders' ability to think critically, reason, and innovate as just as important as, if not more important than, understanding processes or having a business education. The latter is much easier to learn than the former, just as companies often prefer to hire people with strong behavioral qualities over an exact skill set or experience. Philosophical thinking can bring much-needed disruption to traditional processes and business thinking, driving new ideas, methodologies, and solutions into the organization.
What is the responsibility of the builder in the process of developing new technology products? The question reaches beyond company culture, internal policies, or governmental regulations. Ethical considerations need to be central to the product development lifecycle. The builders of technologies need to raise questions about their potential harm, such as the impact the technology can have on non-users, and the unintended consequences of their product, for example, a social media platform becoming an avenue for misinformation. A shift in approach to consider ethical implications as part of the product development cycle would add value across many areas and industries, ranging from the government through the non-profit sector and academia to consumer tech and consulting.
Kathy Pham is a Fellow and Faculty of Product Management and Society at the Harvard Kennedy School. She is a product leader, computer scientist, and founder who has held roles in product management, software engineering, data science, consulting, and leadership in the private, non-profit, and public sectors. She currently also serves as the Deputy Chief Technology Officer of the Federal Trade Commission in the United States, Senior Advisor at the Mozilla Foundation, and Product Advisor at the United States Digital Service. Her expertise lies at the intersection of technology, ethics, and responsibility, with a focus on ethical principles in practice in product management, design, and engineering.
Todd Haugh is an Associate Professor in the Department of Business Law and Ethics at the Kelley School of Business, Indiana University. His research focuses on business and behavioral ethics, moral decision-making and critical thinking, sentencing and punishment for economic crime and public corruption, and white collar and corporate crime.
Emma Pierson is an Assistant Professor of Computer Science at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, and a computer science field member at Cornell University. She develops data science and machine learning methods to study inequality and healthcare.
Eugene Spafford is a Professor in the Department of Computer Science and Executive Director Emeritus of The Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University. Spafford's current primary research interests are in information security, computer crime investigation, and information ethics. He is recognized as one of the senior leaders in the field of computing.
The ethical questions surrounding natural language processing (NLP) concern how NLP systems are used, whether their output is perceived as human rather than machine-generated, and who has access to them. Because NLP depends on access to large amounts of publicly available text and massive computational power, it raises concerns about privacy, consent, and sustainability. Large public data sets are human-generated and contain human biases, which can be reflected in the predictions of NLP models and skew their representativeness. Lastly, NLP prioritizes frequently spoken languages, largely English, which can widen the global digital divide. Ensuring the ethical development of NLP requires human-based interventions: asking who will benefit from a system and who might be harmed, examining whether the raw data is representative or reinforces bias, and ensuring that NLP model training is objective, among other strategies.
Dan Goldwasser is an Associate Professor in the Department of Computer Science at Purdue University. He is broadly interested in connecting natural language with real-world scenarios and using them to guide natural language understanding.
The process of designing an autonomous vehicle reveals that technology design often reflects our larger disagreements over different ethical principles. As in other areas of our lives, the application of ethical principles becomes more complex as technology designers have to weigh desirable features that contradict each other, such as perfect vehicle safety versus affordability to consumers. However, applying ethical principles in technology opens a discussion about competing values and a consideration of the possibility of compromises and trade-offs that lead to reasonable and compassionate solutions.
David Weinberger is an author whose most recent book, the award-winning Everyday Chaos, presents a unique perspective on the rise and importance of machine learning. His work has been published in Wired and Harvard Business Review, as well as in Scientific American, The NY Times, Washington Post, and more. He has given hundreds of keynote speeches around the world, including recent talks on what ethics can learn from AI and the shift in our most ancient strategies for thriving as citizens and businesspeople.
Today, due to growing computing power and the increasing availability of high-quality datasets, artificial intelligence (AI) technologies are entering many areas of our everyday lives. This, however, raises significant ethical concerns, including issues of fairness, privacy, and human autonomy. By aggregating current concerns and criticisms, we identify five crucial shortcomings of the current debate on the ethics of AI. On the threshold of a third wave of AI ethics, we find that the field still fails to take sufficient account of the business context and the deep societal value conflicts that the use of AI systems may evoke. Because even a perfectly fair AI system, regardless of its feasibility, may be ethically problematic, too narrow a focus on the ethical implications of technical systems alone is insufficient. We therefore introduce a business ethics perspective based on the normative theory of contractualism and conceptualise ethical implications as conflicts between the values of diverse stakeholders. We argue that such value conflicts can be resolved through an account of deliberative order ethics, which holds that the stakeholders of an economic community deliberate the costs and benefits and agree on rules for acceptable trade-offs when AI systems are employed. This allows AI ethics to take business practices into account, to recognise the role of firms, and to ensure that ethical AI neither creates a competitive disadvantage nor conflicts with the current functioning of economic markets. By introducing deliberative order ethics, we thus seek to do justice to the fundamental normative and political dimensions at the core of AI ethics.