It helps if you had some of this stuff in college.
The gist of it is that you want your data in many dimensions because that lets you encode more facts about each item – but high-dimensional data starts behaving in really counterintuitive ways. For example, distances between points tend to concentrate: the nearest and the farthest neighbor end up almost equally far away, so data points become harder to tell apart.
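You can see that "harder to tell apart" effect with a few lines of code. A minimal sketch, assuming numpy – the point counts and dimensions here are arbitrary, just enough to show the relative gap between nearest and farthest neighbor shrinking as dimensions grow:

```python
import numpy as np

rng = np.random.default_rng(0)

for d in [2, 10, 100, 1000]:
    points = rng.random((1000, d))   # 1000 random points in d dimensions
    query = rng.random(d)            # one random query point
    dists = np.linalg.norm(points - query, axis=1)
    # Relative contrast: how much farther the farthest point is than
    # the nearest one. This shrinks toward 0 as d grows.
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:5d}  relative contrast={contrast:.3f}")
```

Run that and the contrast drops steadily with each jump in dimensions, which is exactly the "everything looks equally far away" problem.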
That’s really annoying, and people doing AI should be mindful of it, or their models end up underperforming.
I’m more familiar with the curse of dimensionality with respect to clustering (aka looking at a bunch of data points and trying to find groups). Clustering quickly becomes really expensive as your dimensions go up, and that can make your entire approach prohibitively expensive.
Note that the article mentioned doing kNN (k-Nearest-Neighbors – strictly a nearest-neighbor search/classification method rather than clustering, but it pays the same per-dimension distance costs) in 700 dimensions, which kinda sounds like a very good reason why training a major AI model takes obscene amounts of resources.
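To make the cost concrete: brute-force kNN scans every point across every dimension for each query, so one query is O(n · d) work. A rough sketch, again assuming numpy – the 700 dimensions is from the article, the point count is made up for illustration:

```python
import numpy as np

def knn_brute_force(points, query, k):
    """Return indices of the k nearest points to `query` (Euclidean)."""
    dists = np.linalg.norm(points - query, axis=1)  # n * d distance work
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
points = rng.random((10_000, 700))   # 10k points in 700 dimensions
query = rng.random(700)
print(knn_brute_force(points, query, k=5))
```

Every extra dimension multiplies the distance work, and the index structures that normally speed this up (k-d trees and the like) degrade to roughly this brute-force behavior in high dimensions – so the cost really does scale the ugly way.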