ChatGPT's take on Academia and the Enterprise

The following is ChatGPT's version of Steven Zimmerman's article titled ‘Academia and the Enterprise’. This version is notably shorter than the original article, and the facts that remain are all correct. However, ChatGPT has removed many anecdotes, including the mention of SkyNet (should we be concerned?). Which version do you prefer?

Academia and the Enterprise

It is an honour to be asked by a highly respected contributor to the enterprise search community to share my journey from academia to the enterprise. Admittedly, my journey has been unusual, so perhaps it’s best to provide some context before diving into the details.

Now

I am currently a Senior Data Scientist in the NLP team at a large multinational, and I can confidently say that there has never been a more interesting time to work in search and NLP. My journey into this field began 10 years ago, and it has always been fascinating, but the latest generation of large language models (LLMs) has made the work even more interesting.

A former colleague introduced me to ChatGPT on December 1st and claimed that it would be as big as Google. Now, just over four months later, I tend to agree with this assessment. The initial impact of ChatGPT has been so significant that even South Park recently aired an entire episode about its powers and related dangers, co-written with ChatGPT no less. It’s noteworthy that there has yet to be an episode devoted to the release of Google.

Admittedly, there is nothing especially new about ChatGPT, as it builds upon an existing body of research in generative AI. While there has been buzz around models like DALL·E and deep fakes in recent years, ChatGPT is the first generative LLM to garner mass attention and permit easy interaction.

Personally, I was blown away by ChatGPT, as it was the first AI-based interactive dialogue system that felt “real”. However, I quickly realised that there were big holes in many of the legitimate-sounding responses it gave, which those in the business of AI and NLP refer to as “hallucination”. This raises questions about how much trust we should place in a capability whose designers caution us that it will “hallucinate” from time to time.

For me, this question ties directly back to my academic research in the relatively young field of interactive information retrieval (IIR), which focused on mitigating harms on the Web. With this latest technology, there has never been a greater potential for harm, and paradoxically there has never been a greater potential for benefit. There has thus never been a more important time for IIR to play a role in developing methods and evaluation approaches for the safe use of this capability. ChatGPT opens up many new research avenues to explore, and the research possibilities on the Web and in the Enterprise are not only vast but also highly important.

Before Now

It may interest some of you to know that I come from a family of computer scientists who have worked for large tech companies. However, I was initially hesitant to follow in their footsteps due to their gruelling work hours. Nevertheless, I found myself working in computing after finishing undergrad when job opportunities were scarce. While working as a contractor in various menial jobs, I took a few computing courses at Northeastern in Boston and soon found myself working full-time as a programmer at a large financial company.

After five years in technology, I took a break to explore the possibility of pursuing graduate studies in atmospheric physics at Cornell. After a couple of years of studying the fundamentals of atmospheric science, I realised that I was more interested in the computing aspects and less interested in deriving the fluid dynamics of the atmosphere. Though I developed my ability to solve difficult problems independently while at Cornell, I no longer felt excited about an academic career in the atmospheric sciences.

Around 2013, I first heard about NLP and the emerging field of data science through a well-known article on the topic, which sparked a flame in me. A well-timed life event led me to relocate to England, where I had the opportunity to join a newly created MSc programme focused on NLP and search. At the London Text Analytics meetup, co-run by Udo Kruschwitz and Tony Russell-Rose, I connected with many companies that were hiring, including a small startup in a garage in Belsize Park where I interned between the first and second years of my MSc. That startup has since grown into a much larger company called Signal AI.

After completing my MSc, I found full-time work in the data science team of a large newspaper, where I developed document classification pipelines and prototype recommender engines. Timing played an important role here too; Udo Kruschwitz contacted me about an ESRC-funded research grant that looked at human rights in the digital age, which aligned with my concerns about online misinformation campaigns. Specifically, I was very concerned about the false claims surrounding the Brexit referendum. This led me to focus my PhD research on harm mitigation on the Web, initially on hate speech mitigation but then pivoting towards the consideration of the human in the system.

Around the time I submitted my paper on this topic to LREC for review, I attended the Autumn School for Information Retrieval and Information Foraging (ASIRF) at Dagstuhl and read Daniel Kahneman’s “Thinking, Fast and Slow”, lent to me by a fellow PhD student in the Psychology department who researched judgement and decision making in medicine. ASIRF introduced me to many great researchers, most notably David Elsweiler, who lectured on the fundamentals of IIR studies. The book and the autumn school were the foundation for a rapid update to my PhD research plan to include the consideration of the human in the system. This shift in research led to co-authored papers with David Elsweiler and the aforementioned PhD student (Alistair Thorpe).

Alongside my PhD research, my advisor encouraged me to explore avenues in the private sector. He connected me with an enterprise search expert at a large energy company in London, which led to an internship during my PhD. This internship transitioned into my current full-time role as a search and NLP researcher in the private sector. My research is predominantly in the private sector and heavily focused on enterprise search. Applications of NLP and search have interested me from the first day I set foot in the field.

I close with some key learnings from my experience.

When considering an advanced degree in Search/NLP

  • It’s beneficial to take an interdisciplinary approach to your research. While my core research was in computer science, it also considered a broad set of fields. In today’s world, we can’t afford to take a narrow view of the problems we face.
  • Pursuing a PhD is a massive commitment, and I strongly advise against self-funding.
  • While ideology can be a great motivator for research, it’s important to be prepared to let it go. My experiences with hate speech research taught me a lot about this matter.

For those pursuing or recently enrolled in a PhD program, here are some helpful tips:

  • Dive into hands-on work early on in your PhD. Start building experiments and aim to publish your findings as soon as possible.
  • Consider applying for a doctoral consortium, such as the one offered by SIGIR. This is a fantastic opportunity to connect with other researchers in your field and gain valuable experience.
  • Attend summer schools to expand your knowledge base and build connections with potential co-authors. For example, both ASIRF and the Summer Institute on Bounded Rationality at the Max Planck Institute for Human Development are great options.
  • Consider doing an internship or placement at a company to get a sense of whether academia, the private sector, or a combination of the two is the right fit for you.

When it comes to choosing between academia and industry, it’s important to understand that it’s a spectrum, and you need to find what’s right for you after your PhD. There are several considerations and possibilities to keep in mind:

  • Evaluation is much more straightforward in academia than in the private sector. Academia offers greater experimental control, while industry has many moving parts and people to work with.
  • Pure industry jobs tend to pay more, but pure academia offers more freedom (although this freedom has eroded in recent years).
  • Industry also offers the opportunity to investigate interesting research problems in search and NLP, but the problem is typically business-driven, making it easier to define.
  • Some private sector companies offer research positions that allocate some time for academic work outside of the company.
  • It’s common for individuals with full-time academic appointments to do side research in the private sector.
  • It’s possible to work in the private sector and still maintain an academic affiliation to conduct research on the side.
  • If you’re interested in a full-time academic appointment, it’s important to talk to people in that field and fully understand the responsibilities involved, which are quite different from those of a PhD or post-doc. You’ll also have to create course syllabi and teaching slides, grade assignments, and handle administrative work.
