- cross-posted to:
- canada@lemmy.ca
In August, Solomon announced the government had signed an agreement with Cohere to identify where “AI tools can improve public services.”
Cohere’s reported connection to the U.S. AI firm Palantir increases the alarm. Palantir, led by MAGA funder Peter Thiel, could see the Canadian company’s models deployed to its customers, possibly including U.S. defence and intelligence agencies.


That was rhetorical, but sure, OK, let’s do this:
Edit: not to mention that pretty much every academic on there has a vested interest in getting public funding for their work.
You’ve listed 13 who are on the industry side, including one who bridges academia and commercialization. There are 11-12 who fall across civil society, academia, and research. That doesn’t seem wildly unbalanced to me, but nobody is saying it’s perfect, so feel free to suggest how you think it would be better structured and what categories you would form it around.
As I also alluded to in my edit, most of the “academics” are people developing AI, rather than analysing it from different perspectives.
Philosophers of technology and of science, other academics in the humanities, people who work on the theory of education, labour economists, civil rights groups and others working to understand systemic oppression and bias, authors and musicians, to name a few of the types of folks who should be in the room when our government attempts to remake society in the tech-bro image.
Edit: also, like, saying “only half of this team are part of the industry that this panel is supposed to create a regulatory framework for” is kinda wild to me. Especially given how disruptive folks like Carney & Solomon claim this tech is. You’d think we would want like 90% advocacy and civil society groups discussing the complete upheaval of our social systems, rather than literally half the people being the dead-eyed freaks trying to make billions for themselves before the planet burns to a crisp.
If Canada had a national strategy group on achieving leadership in the arts, would you say 90% of members must be from outside the arts and not even experts on the arts who receive any public funding? What would that actually achieve?
This is a strategy group on making plans for how to achieve Canadian leadership in AI. The whole purpose of it is to provide an urgent response to a lack of industrial strategy in a rapidly growing and emerging space of critical importance. They have an objective to provide an industrial strategy document. If you don’t have voices at the table who are engaged in industry, there will be no point in even forming a group because it will never achieve the goal. Nonetheless, it still has substantial civil society representation and open consultation. You didn’t like the questions in the survey? They provided an email address to receive open-ended responses where you could send whatever feedback you wanted.
Also, government is not just one group.
For long-term AI guidance with annual reports, the government also has the Advisory Council on AI, whose mandate is to ensure AI development aligns with Canadian values. That mandate was also expanded this year. https://ised-isde.canada.ca/site/advisory-council-artificial-intelligence/en
And there is the Safe and Secure AI Advisory Group, which is focused on guiding policy on risks from AI. https://ised-isde.canada.ca/site/advisory-council-artificial-intelligence/en/safe-and-secure-ai-advisory-group
Still, none of these are passing legislation or allocating funds.
Government is not a monolith, and Canada is taking a layered approach to AI strategy, one layer of which is industrial policy. And, if Canadians don’t like the strategic guidance produced by any of these groups, they can pressure their representatives to shape the actual legislation around them.
Out of curiosity, what is the actual grounding of your beliefs about AI and AI policy? There is plenty to be concerned about, but your responses are also full of hyperbole. What are you basing them on?
First off, I would love to see that happen. But this question misses the point. Would “leadership in the arts” have a massive impact on tech policy, in the way that “leadership on AI” is likely to impact the arts?
Right, this is the problem: nowhere, to paraphrase Jurassic Park, are they asking “should we do this”; they’re only asking “how can we do this”. If the discussion of “should” is off the table, then there is no point in me continuing this conversation here.
The entire survey was open-ended responses - well, other than a (pretty generous) character limit on the input fields.
There has been loads of pushback. I have yet to see this government budge.
What is the “grounding” of any belief about anything? That’s a much more interesting question, one that AI boosters would do well to think more deeply about.
Okay… wow. I even pointed you to two government groups working on other sides of the issue, but you’re just ignoring the overall government approach.
The government approach isn’t perfect, but I have no interest in arguing with someone focused on establishing an ideological position, returning to hyperbole again and again, and responding to a reasonable question with stuff like this:
We can just leave it as agreeing to disagree. No point wasting anyone’s time.
Um, I’m not ignoring it; it’s simply that the “overall government approach” has been clearly spelled out by the Minister of AI, who has said he will not “over-index on regulation”.
That’s why we haven’t had consultations on any other aspect of AI, only how we can help the industry make money.
As for your question about the grounding of my “belief” about AI: what kind of answer were you expecting, or rather, what answer would you not have dismissed?