Predictions about the potential impacts of generative AI may be hugely overblown because of "many serious, unsolved problems" with the technology according to Gary Marcus, one of the field's leading voices.
"Maybe we should not be building our world around the premise that it is," he argues.
I feel like this is a really important point. If LLMs turn out to have unsolvable issues that limit the scope of their application, that's fine; every technology has limits, but we need to be aware of them. A fallible machine learning model is not dangerous in itself; what is dangerous is deploying it without skepticism for AI-based grading, plagiarism checking, resume filtering, coding, and so on.
LLMs probably have genuinely good applications in tasks that could not be automated before, but we should be very careful about what we assume those applications to be.