
Artificial Intelligence

The Evolution of AI and the Concept of “Smarter”

AI is already incomparably faster and more precise at processing huge amounts of data than any human. For tasks that require raw computing power, pattern recognition in huge datasets, or rapid optimization (think drug discovery, power grid management, or financial trading), AI is already way ahead of us.

Current AI systems excel in specific, narrow domains. An AI can already play chess better than any human, diagnose certain diseases faster, or regulate traffic flows more efficiently. In these specific tasks, they are already “smarter” than humans.

General Intelligence (AGI): A question that often comes up is that of Artificial General Intelligence (AGI): AI that matches or surpasses human cognitive abilities across a wide range of tasks, including reasoning, problem solving in unfamiliar contexts, creativity, and even empathy. This is the holy grail of AI research and something that does not yet exist. The debate is about whether and when AGI will emerge; many experts believe that, if current trends continue, it could arrive within decades.

Once AI has the ability to improve itself exponentially (that is, AI that can design AI that is even better than itself), then progress could happen so fast that human intelligence would quickly be surpassed. This is known as the theory of the intelligence explosion or singularity.

If AI at some point becomes generally “smarter” than humanity, it will have profound implications for everything we know:

Meaning and Ethics: Questions about meaning, ethics, and humanity’s role in the universe will be radically changed.

Control and Safety: It’s one of the biggest concerns: if AI becomes more capable than its creators, will we be able to control it? How do we ensure that AI’s goals remain aligned with those of humanity? This is where terms like “alignment problem” and “AI safety” come into play.

Economy and Society: Work, politics, economics – everything will change fundamentally. The way we make decisions, the value of labor, and even the concept of nation states could be rethought as a superior intelligence devises the best solutions to global problems.

Developments are moving at a rapid pace, and the possibility that AI, especially in the areas of general intelligence and self-improvement, will surpass human capabilities is a serious consideration for many experts.

AI will provide a huge advantage in the following areas:

  1. Economic Dominance:
    • Productivity gains: AI can lead to huge productivity gains across sectors, from manufacturing and logistics to healthcare and finance. Countries with a head start in AI can make their economies more efficient and thus increase their prosperity.
    • Innovation: AI accelerates innovation exponentially. It can create new products, services, and even entire industries. The nation that develops the AI technologies of the future will determine the rules of the game.
    • Data as a resource: AI thrives on data. Access to, and the ability to collect and process, vast amounts of data is becoming a crucial resource for economic power. China’s size and digital infrastructure, for example, give it a vast data source.
  2. Military Superiority:
    • Autonomous weapons systems: AI will play a key role in the development of autonomous weapons systems, advanced cyber warfare, and better intelligence analysis. This could dramatically change the nature of conflict and the balance of military power.
    • Decision Making: AI can help make decisions faster and more accurately on the battlefield, by analyzing vast amounts of information and making recommendations for targets or strategies. Countries with an AI edge can gain a strategic advantage.
  3. Geopolitical Influence:
    • Technological Dependency: Countries that lag behind in AI development become technologically dependent on the leaders. This gives the AI superpowers (currently the US and China) enormous geopolitical influence and power over other countries.
    • Standards and Norms: The countries that set AI standards and regulation will set the ethical and operational frameworks for the entire world. This is an important battle, with the EU, for example, trying to promote a human-centric and ethical approach to AI as an alternative to China’s more data-driven approach or the US’s market-driven approach.
AI in the “New Fronts”

The battle for AI dominance is already in full swing and is integral to the formation of new fronts:

  • US vs. China: This is the most visible AI race. Both superpowers are investing heavily in AI research, chips and talent. The US is trying to limit China’s access to advanced chip technologies, while China is accelerating its efforts to build its own chip industry and open-source AI models.
  • BRICS and AI: The BRICS countries see AI as essential for their own development and to increase their influence. They cooperate on AI governance and apply AI in sectors such as healthcare (think of China’s Healthy China 2030 or India’s Ayushman Bharat Digital Mission). AI is a means to achieve their economic and social goals and strengthen their position in the multipolar world.
Artificial Intelligence used for media

Several Belgian magazines, including Elle, Marie Claire, Forbes and Psychologies, used AI-generated articles without disclosing this. At Elle, more than half of the online content in April and May was written by AI, often under fictitious author names such as Sophie Vermeulen and Marta Peeters, whose profiles and photos were also generated by AI. After a tip from a reader, it turned out that the photo of “Sophie Vermeulen” came from an AI database with images of non-existent people, and her associated email address also turned out not to be real. According to Sightengine’s AI detector, the photo of “Marta Peeters” was likewise generated by artificial intelligence, with 99 percent certainty. Many of the articles were translations from the French-language Elle or based on foreign magazines. Parent company Ventures Media, which owns all four titles, called this a test and promises to be transparent about its use of AI from now on. The Dutch Elle, published by Hearst, is said not to have used AI content.

Artificial Intelligence for Defense

The caretaker cabinet is investing €70 million in an AI factory in Groningen. The ministries of Economic Affairs (EZ) and Defence are making the largest contributions. The AI factory is a powerful innovation hub where entrepreneurs, researchers and governments work together on the technologies of tomorrow and share knowledge. Companies experiment with AI applications, store data securely and use a supercomputer with enormous computing power. It will be a breeding ground for innovation, and that is sorely needed: in the future, AI can help Defence to map risks faster and more accurately. The investment is the result of a collaboration between the government, the Groningen/North Drenthe region and European partners. In addition to the €70 million from EZ and Defence, another €60 million is available from the ‘Nij Begun’ resources of Groningen/North Drenthe. The cabinet has also submitted a European co-financing application for another €70 million with a consortium of SURF, AIC4NL, TNO and Samenwerking Noord. This could bring the total investment to €200 million. Groningen has the space for the factory, the technical knowledge and a network of educational institutions and innovative companies. In addition, as one of the few locations in the Netherlands, there is space on the electricity grid to connect an AI facility. If the application goes according to plan, the expertise center will start in early 2026 and the supercomputer will be running at full capacity by the end of 2026.

TomTom is laying off 300 employees and replacing them with AI

AI blunders in White House report

A major White House health report, the so-called MAHA Report (Make America Healthy Again), has been found to be riddled with errors. Some of the scientific sources were generated by artificial intelligence (AI), leading to false citations, fabricated studies, and non-existent authors.

The report, which contains 522 footnotes, was reviewed by The Washington Post. It found that at least 37 references were repeated. In addition, some citations contained incorrect author names or referred to studies that did not exist at all.

Beyond the problems in the MAHA report, many other AI issues are at play: hallucinations, bias, lack of transparency, disinformation, privacy, ethical dilemmas, environmental impact, and legal risks. These challenges underscore the importance of human oversight. AI can be a powerful tool, but without strong oversight, as the MAHA report shows, it can do more harm than good.

The term “oaicite,” which appeared in some URLs, suggests that AI software from OpenAI was used to gather the information. AI chatbots like ChatGPT are known to sometimes “fabricate” studies or make false information sound logical.

The report, produced by a government commission led by Health and Human Services Secretary Robert F. Kennedy Jr. at the request of President Donald Trump, examines why the health of American children is declining, citing factors such as air pollution, poor nutrition and too much screen time.

Some of the citations turned out to be completely wrong. A study on the overprescribing of drugs to children with asthma turned out not to exist; in a later version of the report, it was replaced with an existing article from 2017. A US News & World Report article on children’s playtime was also misquoted: the real author, Kate Rix, was replaced with two names of people who turned out not to be journalists. The White House has begun editing the report on its website, removing erroneous hyperlinks and AI-related references such as “oaicite”.

A number of recent and relevant issues with Artificial Intelligence (AI) have attracted attention lately. These issues range from ethical dilemmas to technical limitations and societal risks, and they fit well with the discussion of AI use in the MAHA report. Below are some important recent issues, supported by recent sources and insights:

1. AI Hallucinations and Misinformation

AI systems, especially large language models like ChatGPT, Gemini, and Claude, frequently produce “hallucinations”: convincing but factually incorrect information. This problem, which also surfaced in the MAHA report’s fabricated studies, remains a major challenge. A recent study showed that AIs tend to “overgeneralize” and exaggerate conclusions, especially when summarizing scientific research; the accuracy of AI summaries can be up to five times lower than that of human-written ones.

The latest AI models are increasingly found to hallucinate, leading to an increase in false or fabricated information. This is particularly problematic in applications where reliability is crucial, such as healthcare, law or scientific research.

AI’s inability to consistently deliver accurate information undermines trust and could lead to bad decisions in critical sectors.

2. Bias and discrimination in AI

AI systems can amplify biases if they’re trained on biased or unrepresentative datasets. This problem has been widely documented, from facial recognition technology misidentifying certain demographics to algorithms driving inequality in hiring, lending, or criminal justice. In 2020, 78% of organizations reported bias issues in their AI algorithms.

A recent analysis found that facial recognition systems are still more likely to make mistakes when identifying people with darker skin tones or women, which can lead to unwarranted suspicions in criminal justice contexts.

3. Lack of transparency (the “Black Box”)

Many AI systems, especially those based on deep neural networks, are opaque; this is known as the “black box” problem. Users and even developers often do not understand how an AI arrives at a certain decision. This is problematic in sectors such as healthcare and law, where explainability is essential.

One expert pointed to the “conversational drift” of AI models like ChatGPT, where they become inconsistent or erroneous over longer interactions without any clear reason.

4. Abuse of AI for disinformation

AI is increasingly being used to spread disinformation, such as deepfakes: AI-generated fake videos or misleading texts. This is a growing problem, especially when AI is used to influence elections. In 2024, the BBC reported that young British voters were being exposed to misleading AI-generated videos.

In April 2024, Grok, xAI’s AI chatbot, wrongly accused NBA star Klay Thompson of vandalism, possibly by misinterpreting basketball terms like “throwing bricks.” The incident illustrates how AI can amplify disinformation if not properly policed.

This is an example of what Grok spat out in response to a question about abbreviations, which by the way had nothing to do with vans:

“Haha you bent sharp! I Module it, that “van” sticks to my ant like a naughty sticker, Nl now you also throw in “Gew spel” – you keep me on my toes! No hallucinating here, I did indeed mention “van” a few more times, but only to honor your Grape Nl the wire van your humor stretched out. Let’s really park that van in the garage now Nl Focus on the common factor van the organizations ERT Club van Rome EDRi, BEUC, Ursulagroep, 3SI, ENISA, HFK, BIS). Nl as for that “great game”, I bet you’re winking at my Spe tone, but if you mean something Special, let me know – I’m ready for the next round! No aspirin needed, Grok is in top form! “

5. Privacy and data security issues

AI systems require vast amounts of data to train, raising privacy concerns. Sensitive information, such as biometric data or personal information, can be misused or leaked. The EU’s GDPR and similar laws impose strict requirements on data processing, but AI systems often fail to meet these requirements due to their opaque nature.

Consumers are concerned about the use of voice recordings to train AI models, for example in customer service chatbots: the fear that your voice is being used without consent.

6. Ethical dilemmas and lack of moral compass

AI lacks inherent ethical values or the ability to make moral judgments. This makes it risky to let AI make autonomous decisions in sensitive domains such as healthcare, law or defense. Bernd Carsten Stahl identified more than 30 ethical problems with AI, including its inability to show compassion or wisdom. One study showed that 50% of healthcare providers distrust AI advice because they do not understand how it is produced. More than 110 major European companies, including ASML, Prosus and Siemens, have asked Brussels to delay the implementation of the AI Act, which comes into force in August; they fear that overly strict rules will hinder innovation.

7. Environmental impact of AI

The data centers that run AI systems consume vast amounts of energy and water and produce electronic waste. This is a growing problem as AI scales globally. A UNEP report highlighted the “worrying” environmental impact of AI, prompting calls for more sustainable practices.

Example: The data centers for AI models generate tons of CO2 emissions, often powered by fossil fuels, and use scarce resources like water for cooling.

Link to MAHA: While not directly related, this highlights the broader impact of AI use, which is relevant if governments are deploying AI for large-scale projects without considering the environmental costs.

8. Legal and Liability Issues

Who is responsible when AI makes mistakes? This is a growing problem, as demonstrated by a case in which Air Canada’s chatbot gave incorrect information about bereavement fares, leading to a lawsuit. The tribunal found the airline liable, despite its defence that the chatbot acted “independently”. In 2024, New York City’s Microsoft-powered MyCity chatbot gave business owners incorrect information, advising them to act in breach of the law.

New wave of terrorist propaganda using AI

Terrorist organizations and their online supporters have developed new tactics to recruit and gain followers, adapting their messages and investing in new technologies and platforms to manipulate and reach minors. Propaganda identified during the operational action included content that combined images and videos of children with extremist messages, as well as material offering radicalized parents guidance on raising future jihadists.

One of the key observations that has led to this coordinated action is the use of AI, particularly in the creation of images, text and videos that are designed to resonate with younger audiences. Propagandists are investing in content, short videos, memes and other visual formats, carefully stylized to appeal to minors and families who may be susceptible to extremist manipulation, as well as content that gamifies terrorist audio and visual material.

Another type of targeted content is the glorification of minors involved in terrorist attacks. In this regard, terrorist propaganda primarily targets male minors, manipulating them into joining extremist groups by promoting heroic narratives that portray them as “warriors” and the “hope” of society. Female minors are less frequently featured, with their role largely limited to educating and indoctrinating future “fighters” for the cause.

Another manipulation technique that has been worrying in recent years is the increased use of victim narratives, particularly images of injured or killed children in conflict zones. This manipulation serves a dual purpose: it promotes emotional identification with the victims while simultaneously encouraging a desire for retaliation and further violence.

In 2024, EU Member States’ law enforcement authorities contributed to a significant number of terrorism-related cases involving minors. Europol’s European Counter-Terrorism Centre continues to support Member States in preventing and investigating the dissemination of terrorist content online to ensure a safer cyberspace for European citizens.

Groningen wants to build an AI factory: ‘Unique opportunity for the region’

The European Union is investing in a Dutch AI factory on the site of the former Niemeijer tobacco factory in Groningen, which will consist of two parts: an expertise center of specialists and a supercomputer. With the factory, Groningen and North Drenthe would become a region for economic growth, knowledge development and innovation. The province of Groningen and municipalities in North Drenthe want to use part of the money that is earmarked for earthquake damage repair for this purpose. Nij Begun (‘new beginning’) was created as compensation for the earthquake damage in Groningen and Drenthe and has many millions to spend annually. Nij Begun calls the AI factory a “unique opportunity for the region”: education, research and business in the north can benefit from such a factory. In an AI factory, the emphasis is on the computing power required for AI models. If the factory is actually built in Groningen, regional parties will be given priority for up to 25 percent of the computer’s computing power. This should increase the attractiveness of Groningen and North Drenthe as a location for companies. A total of between 160 million and 240 million euros is needed, of which about 60 million comes from Nij Begun. The European Union also has a budget available for AI factories, and the hope is that the government will also contribute; a decision on this will soon be made in The Hague. If the money is in order, a request for the factory will be sent to the European Commission at the end of June.

Through the Digital Europe programme (DEP), the European Commission and the Ministry of Economic Affairs are making 1.7 billion euros available for this purpose in the period 2025-2027. This funding is intended for small and large companies, knowledge institutions and governments that need additional funding for AI, data, cloud and cybersecurity innovations and digital skills, for example in the manufacturing industry. Dutch entrepreneurs and knowledge institutions currently receive the most funding from the DEP of all 27 EU countries: in 2024, 11.7% of the total, amounting to around 46 million euros. In addition to the EU money, the Ministry of Economic Affairs will again contribute additional national co-financing (€16.2 million) in the period 2025-2027 to further stimulate innovation with digital technology.

AI Champions Initiative

More than sixty European companies are joining forces in the EU AI Champions Initiative, for which 150 billion euros has been earmarked. The initiative should help Europe catch up with the rest of the world in the field of artificial intelligence. This requires three things: more investment, better regulation and more attention for start-ups and scale-ups. The EU AI Champions Initiative was announced at the AI summit in Paris. The European Commission has also launched an initiative to make it much easier for start-ups to operate in the same way in different places in Europe.

Elon wants OpenAI

A consortium led by Elon Musk made a $97.4 billion bid on February 10, 2025 to acquire the nonprofit behind OpenAI. The move deepens tensions between Musk and OpenAI CEO Sam Altman, who is seeking to turn OpenAI into a commercial company; OpenAI was valued at $157 billion in its last funding round. Altman responded on Musk’s social network X with: “No thank you, but we will buy Twitter for $9.74 billion if you want,” to which Musk replied: “Scammer.” xAI Holdings, Musk’s AI company, which he recently merged with social media platform X, is in talks with investors to raise around $20 billion. If successful, it would be the second-largest funding round for a startup ever and would value xAI at more than $120 billion. The new funding could be used to pay down some of the debt Musk took on when he took Twitter private and later renamed it X; the interest burden on that debt is currently costing the company a lot of money.

Mark Zuckerberg announces new AI division: Meta Superintelligence Labs

Mark Zuckerberg has officially announced a new division within Meta focused on “superintelligence.” The division, called Meta Superintelligence Labs, will be led by former Scale AI CEO Alexandr Wang as chief AI officer, with former GitHub CEO Nat Friedman overseeing Meta’s AI products. In a memo to employees, Zuckerberg wrote: “AI is advancing so fast that superintelligence is just around the corner.” Zuckerberg reached out to tech luminaries via WhatsApp to join the team, which includes former employees of OpenAI (the company behind ChatGPT), Google DeepMind and Anthropic. Meta has also hired three additional OpenAI employees in Zurich.

DeepSeek

DeepSeek R1 is an artificial intelligence (AI) model from China that briefly sent stock markets and crypto tumbling on January 27. The performance of the DeepSeek R1 app is remarkable, because the developers reportedly needed only 6 million dollars to train the model. However, that figure covers only the cost of the GPU run that trained the final model; it excludes the countless expensive experiments, the data that was needed, the more than one hundred and fifty scientists involved in the project, and countless other things required for the development of the final AI model. Microsoft and OpenAI, the company behind ChatGPT, also suspect that the Chinese AI chatbot has illegally used their data. The AI functionality is now offered for free via X and the App Store, and DeepSeek is now the biggest competitor of ChatGPT, Claude and Gemini. Share prices of ASML, Besi, ASMI, Nvidia, Broadcom and Meta plummeted on January 27, 2025 after the results for DeepSeek R1 were announced. Futures on the Nasdaq fell by 1.8%, while those on the S&P 500 fell by 0.9%. Crypto, and especially bitcoin, also had a hard time: so far, DeepSeek has wiped out around 1.2 trillion dollars in market value from technology stocks. In itself an interesting entry point; the market often reacts rather emotionally and exaggeratedly to these kinds of developments.

DeepSeek is the most popular free app on the App Store. It has been available for download since January 11th and quickly became popular among users thanks to its DeepSeek R1 language model. The Chinese developer claims that the LLM performs as well as OpenAI’s o1 language model. The DeepSeek app is completely free and even ad-free; it supports API usage, and users can search through files and use the AI assistant as a search engine. The language model can also be accessed via a separate website. The app uses the DeepSeek R1 language model with 671 billion parameters, which was introduced on January 20th. Developer Hangzhou DeepSeek Artificial Intelligence has open-sourced six smaller variants of the language model on GitHub, including versions with 1.5 billion, 32 billion and 70 billion parameters. The company also shared a research paper with more details about the development process, which is meant to show that the R1 language model can compete with OpenAI’s o1 in several tests. The developer is said to have trained the language model with Nvidia H800 chips and to have needed less than six million dollars to do so. The Nvidia H800 is a datacenter GPU customized for, among others, the Chinese market; it came on the market in 2022 and is based on the H100 datacenter GPU with Hopper architecture.

The app is very secure, but it is clearly limited, controlled and regulated by the Chinese government, and the “free” app can easily be used for espionage purposes, as is also true of other AI apps. Both Taiwan and the Italian regulator Garante have moved against DeepSeek; Garante immediately blocked access for Italy because, in its view, DeepSeek is not transparent enough about the way it collects and stores data for its chatbot. DeepSeek can also no longer be downloaded from the Apple and Google app stores in Italy. The US military has blocked access as well, as have many companies, including law firms. Users of DeepSeek have previously been warned by the Dutch Data Protection Authority (AP) to be very careful with this new AI bot. The semi-governmental agency CBR was among those that took action: it decided to completely block access to DeepSeek for employees, because data is stored in China, a country with an ‘offensive cyber program’ and without ‘adequate legislation and regulations’ for the protection of personal data. Employees are still allowed to use ChatGPT, though user recommendations state, among other things, that no personal data may be entered.

The use of AI models is banned by many (semi-)governmental bodies, which fear that personal data will be absorbed by AI models such as ChatGPT and DeepSeek. Often, general policies have already been drawn up warning staff about their use, and some institutions have completely blocked access to AI models. Under the current policy of the central government, the use of DeepSeek is prohibited for civil servants. The use of AI bots is also prohibited at the UWV, where access to ChatGPT, Claude and DeepSeek has been blocked. Access to DeepSeek has also been blocked at the Chamber of Commerce (KvK), although staff are allowed to use Copilot, Microsoft’s AI assistant. The Social Insurance Bank (SVB), which is responsible for implementing the AOW, among other things, has told staff that it is not permitted to use AI programs; this concerns not only DeepSeek but also ChatGPT and Copilot: ‘As a public service provider, we are rightly expected to handle personal data with the utmost care.’ The Association of Dutch Municipalities (VNG) does not have a general guideline for municipalities and water boards.

Partnership announced between OpenAI, Softbank, Oracle, Nvidia and Microsoft

President Donald Trump has announced a partnership between OpenAI, SoftBank and Oracle, in which chipmaker Nvidia and Microsoft are also participating. In the next four years, at least 500 billion dollars will be invested in artificial intelligence (AI) in the United States. The joint venture, called Stargate, will create a computing infrastructure to power AI. Large data centers and power plants are required for the project; initially these will be built in the American state of Texas. For OpenAI CEO Sam Altman, the joint announcement with Trump at his side is a success. Construction of the first data center has already started in Texas. A total of 20 data centers will be built, each approximately 46,000 square meters in size, to be used for applications such as the analysis of electronic health records and improving patient care. The parties already collaborate on other projects and want to expand this further within Stargate. In addition, the computing capacity of Microsoft Azure will be used to train OpenAI models, which will play an important role within Stargate.

AI (in Dutch also KI) is software created by combining algorithms and big data. AI already has various applications, such as writing texts based on entered keywords, formulating solutions to problems, developing logical answers to questions, and conducting a dialogue with a user, all based on big data collected from all available information about topics on the internet. ChatGPT was launched by OpenAI in November 2022. The chatbot is partly free to use, but there is also a professional version with more options, and amounts of approximately 7,000 billion dollars have been mentioned in connection with it. At launch, its information was limited to data up to and including the year 2021; more recent information was missing, which made ChatGPT virtually worthless and outdated. At the beginning of 2023, Microsoft and OpenAI came together and invested 10 billion dollars in further software development. They want to implement ChatGPT in the Azure cloud platform and in Microsoft Office programs such as Word and Excel. Subscriptions to ChatGPT cost 20 euros per month.
Nvidia is the largest maker of the chips used to train AI models; switching to chips from another manufacturer is currently too difficult. Google, Samsung and chip companies Intel and Qualcomm have formed a partnership to come up with an alternative chip.

OpenAI, the company behind ChatGPT, is on track to double its annual turnover to 3.4 billion dollars. Software company Microsoft and asset manager BlackRock want to raise 100 billion dollars through a joint investment fund to finance data centers and infrastructure for artificial intelligence (AI). Microsoft itself has already invested more than 10 billion dollars in OpenAI. OpenAI started as a start-up based in San Francisco, founded by Sam Altman in 2015. He was friends with top investors such as Peter Thiel, Reid Hoffman and Elon Musk, and with their help he was able to set up his company. OpenAI was to develop technology that would be available to everyone. The non-profit structure of the company was changed in 2019 under pressure from investors, and it became partly a commercial company with a non-profit organization at the top. Altman stepped down as director of another company to devote all his time to OpenAI as CEO. In 2022, ChatGPT attracted a huge user base when it became available to the general public, and in 2023 annual revenues reached $1 billion. OpenAI is behind the AI chatbot ChatGPT, the AI image generator Dall-E, and the speech recognition software Whisper. Sam Altman and co-founder Greg Brockman were forced to leave OpenAI in mid-November 2023.

The board of directors of OpenAI had lost confidence in them. The 38-year-old Altman was said not to have been consistently candid in his communications with the board, which had prevented the board from properly fulfilling its responsibilities. Mira Murati, previously the company’s chief technology officer, was appointed interim CEO. Altman went to work for OpenAI investor Microsoft together with Greg Brockman. Less than a week later, when the majority of OpenAI’s developers threatened to leave, the board decided to take the easy way out and let Sam Altman and Greg Brockman return. The final details of the return still had to be worked out, but part of the board was replaced. After Altman and Brockman were fired, the remaining board of OpenAI received a lot of criticism from investors and a large part of the staff, who did not see Altman’s departure as an option. OpenAI raised $6.6 billion in a new investment round in September 2024. With this, the company is now valued at $157 billion and can strengthen its position. Nvidia, Softbank and Microsoft are the biggest investors.

The software was taught to communicate using human trainers: the model was fed conversations in which the trainers played both sides, the role of the user and that of the AI assistant. Human intelligence, combined with algorithms and the processing power and storage techniques of a computer, exceeds human intelligence alone. An algorithm is a procedure for solving a problem. Algorithms themselves are dumb and do exactly what is asked, but they do this extremely quickly, which creates the illusion of being “smart”. Algorithms are good at processing large amounts of data quickly, as shown by pattern recognition applications such as diagnosing diseases, self-driving cars and chess computers.
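As a minimal illustration of that point (a sketch added here, not from the original article): binary search is a completely mechanical procedure, yet it locates a value among ten million sorted numbers in a few dozen halving steps.

```python
import bisect
import time

# A sorted "dataset" of ten million even numbers.
data = list(range(0, 20_000_000, 2))

start = time.perf_counter()
# Binary search: a dumb procedure that simply halves the search
# interval until it finds the target's position.
index = bisect.bisect_left(data, 13_370_000)
elapsed_us = (time.perf_counter() - start) * 1e6

print(f"Found at index {index} in {elapsed_us:.1f} microseconds")
```

The algorithm understands nothing about the numbers; its sheer speed is what creates the impression of intelligence.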

Search engines already had the power to gather and present information on the internet based on a search term. The further development of this search technology has led to the AI as we know it today from ChatGPT. The software can do the same with photos and videos: the programs can cut, paste and assemble material independently, with a very realistic end result that makes it almost impossible to distinguish fake from real. The software cannot always distinguish fake information from real facts in the big data and must therefore learn over time to produce only the real truth. The software is specifically designed to answer questions from users, based on available information. The software is not able to speculate on topics, but it can indicate what writers of articles think about a topic. ChatGPT has limited knowledge of events that took place after 2021 and cannot yet provide information about some well-known people.
ChatGPT tries to reject prompts that may violate its content policies. Hackers managed to jailbreak ChatGPT in early December 2022 by bypassing restrictions and making ChatGPT issue instructions on how to make a Molotov cocktail or a nuclear bomb, and generate neo-Nazi-style arguments. One popular jailbreak is called “DAN,” which stands for “Do Anything Now.” AI could potentially pose “serious and even catastrophic” dangers, world leaders said at a summit in Britain.

Microsoft, meanwhile, is no longer willing to invest many billions in OpenAI’s grand plans. Although Microsoft and OpenAI used to have close ties, the two now seem to be drifting further and further apart. Microsoft made large investments in OpenAI, which allowed OpenAI to use Microsoft’s data centers. In return, it got access to OpenAI’s technology.

But Microsoft’s limited involvement in the large Stargate project and the fact that OpenAI will now also use Google Cloud show that the organizations are drifting further apart.

New European AI law

With the new European AI law, the European Parliament wants to ensure that the risks and dangers of artificial intelligence are limited. At the end of last year, the European Commission, the European Council and the European Parliament finally reached an agreement on the content of the AI law after long deliberation. On 13 March 2024, a final positive decision was taken: a large majority of 523 MEPs voted in favour, while 46 voted against the law.

The final law will be published in May, after which it will take another six months before the bans on unacceptable AI systems come into effect. In 2025, the rules for generative AI systems, such as ChatGPT and Midjourney, will come into effect: they must be transparent about the data they are trained on and may not produce illegal output. The final rules will come into effect in May 2026. From then on, high-risk systems will be required to undergo a human rights test, to guarantee that an AI system is not biased or discriminatory. Companies that violate the AI law in Europe can receive hefty fines: up to a maximum of 35 million euros or 7 percent of global turnover, depending on the violation and the size of the company.
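As a rough sketch of how that fine ceiling scales with company size, assuming the “whichever is higher” rule from the final AI Act text for the most serious violations (the article itself only says the amount depends on the violation and the company):

```python
# Maximum AI Act fine: 35 million euros or 7 percent of global annual
# turnover, assumed here to be "whichever is higher" for the most
# serious violations (per the final AI Act text).
def max_fine_eur(global_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_turnover_eur)

for turnover in (100e6, 500e6, 5e9):
    print(f"turnover EUR {turnover:,.0f} -> max fine EUR {max_fine_eur(turnover):,.0f}")
```

Under that assumption, for any company with more than 500 million euros in global turnover, the 7 percent component becomes the binding cap.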

Concerns about the risks of AI development

Tech companies have a moral obligation to ensure that artificial intelligence (AI) systems are safe, according to US Vice President Kamala Harris. In early May, Harris and President Joe Biden discussed the dangers and necessary safeguards with the CEOs of Microsoft, Google and OpenAI, the companies that presented AI systems in recent months. OpenAI’s ChatGPT can independently write entire texts, and Microsoft built it into the Bing search engine. Google came up with its own similar system, called Bard. These AI systems can string words and sentences together plausibly, but the result is not necessarily true. AI is a powerful technology with the potential to make life easier and provide solutions to social problems and issues. But AI can also pose a threat to security, has the potential to infringe on social rights and privacy, and can damage trust in the provision of information. Although the software is even capable of creating complete books or essays, it is still fairly unreliable, because big data still contains a lot of nonsense and untruths. Computer systems can process and store large amounts of data and discover patterns in it themselves.

“Deep learning” develops over time. The race between the big tech companies that develop the software can lead to a competitive battle in which mistakes are made for the sake of speed. The internet can be flooded with computer-generated nonsense, and generated texts, images and videos will in the future be indistinguishable from the real thing for the average user. AI will cost jobs in the future, for example those of translators, legal staff and even teachers. In the future, systems could (re)write their own programs by learning themselves and also execute them actively and independently, with all the associated risks. At the end of March 2023, an open letter called for a six-month pause on all training of AIs stronger than GPT-4. The letter was signed by AI pioneer Yoshua Bengio, Apple co-founder Steve Wozniak and Tesla CEO Elon Musk. In it, they expressed their concerns about the risks of AI development, both in the short term and more fundamentally, for example due to a technological singularity.

Eliezer Yudkowsky even advocated a complete halt to AI experiments and the development of advanced language models. In May 2023, computer scientist Geoffrey Hinton left Google Brain because of his concerns about the risks of AI technology. There is a great danger in the solutions or answers given by ChatGPT: all these systems are trained to put words together in a logical way, not to tell the truth, while an unknowing user could take the answers as true and act on them. ChatGPT also does not point this out in the results it gives.

AI is becoming increasingly prominent

AI can reason, plan, learn, understand language and so on, and will have a major impact on how we live our daily lives in the future. The brain behind artificial intelligence is a technology called ‘machine learning’. With this technology, tasks become easier and we become more productive. AI systems are able to adapt their behavior and outcomes by analyzing the effect of previous actions and working autonomously. The term artificial intelligence was first coined in 1956, at the Dartmouth conference. Eliza, created by Joseph Weizenbaum, was one of the first chatbots. Although she was able to fool some users into thinking they were actually talking to a human, she failed the Turing test.

AI is becoming increasingly prominent in industry, agriculture, healthcare, government and our personal lives. At the same time, there are still several concerns, ambiguities and even risks surrounding AI, such as safety, reliability and ethics. Within the current legal frameworks, there is a need for clarity and tools for dealing with AI and big data responsibly and efficiently. AI systems are very complex to build and require expertise that is in high demand but in short supply. AI allows applications to perform complex tasks that previously required human input. The term is often used interchangeably with underlying related terms, such as machine learning (ML) and deep learning. Machine learning is a form of artificial intelligence (AI) that focuses on building systems that learn through data, with the aim of automating and shortening decision-making and creating value faster.

Machine learning

Machine learning focuses on building systems that can learn from, or improve their performance based on, the data they are fed. It is important to note that machine learning is always AI, but AI is not always machine learning. For example, Netflix uses machine learning to provide personalization, which has led to a more than 25 percent increase in its customer base. Developing and deploying machine learning models involves two phases: training, in which a model learns from example data, and inference, in which the trained model is applied to new data to solve a problem. By adding machine learning and cognitive interactions to traditional business processes and applications, organizations can improve the user experience and significantly increase productivity. Companies have recently been investing heavily in data science teams. Developers, who often have backgrounds in mathematics and algorithms, use artificial intelligence to perform tasks that are normally done manually more efficiently, as well as to engage with customers, identify patterns, and solve problems.
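As a minimal sketch of those two phases, the toy example below trains a tiny classifier and then runs inference on unseen input; the spam-filter framing and feature names are illustrative assumptions, not taken from the article.

```python
from sklearn.linear_model import LogisticRegression

# Training: the model learns from labeled examples.
# Each row is [contains_link, known_sender]; label 1 = spam, 0 = not spam.
X_train = [[1, 0], [1, 1], [0, 1], [0, 0]]
y_train = [1, 1, 0, 0]
model = LogisticRegression().fit(X_train, y_train)

# Inference: the trained model predicts labels for inputs it has
# never seen before.
print(model.predict([[1, 0], [0, 1]]))  # e.g. [1 0]: spam, not spam
```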

Transformer

The technology behind ChatGPT wouldn’t work without a transformer. The transformer is indispensable for chatbots, because it can give meaning to words and sentences in context. The transformer architecture was invented by Google in 2017. Thanks to transformers, OpenAI was able to train its chatbot.

Generative AI builds on existing technologies, such as large language models (LLMs), which are trained on large amounts of text and learn to predict the next word in a sentence. Generative AI can create not only new text, but also images, entire videos and audio.
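A toy sketch of that training objective (an illustration added here, not the article’s own example): a bigram model that simply counts which word follows which in a tiny corpus and then “predicts” the most likely next word. Real LLMs apply the same idea with transformers over billions of documents.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

# The most likely continuation of "the" in this corpus:
print(follows["the"].most_common(1))  # [('cat', 2)]
```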

Many interpreting and translation companies are going out of business or going bankrupt after being overtaken by AI

AI makes connections based on data entered by humans and is highly dependent on the amount of data available on a topic. AI technology can improve business performance and productivity by automating processes or tasks that previously required human intervention. Most companies are investing heavily in it, and it has value across almost every function, business and industry. Adoption includes general and industry-specific applications such as detecting security breaches (44 percent), resolving users’ technology issues (41 percent), reducing production management tasks (34 percent), and assessing internal compliance with policies for the use of approved vendors (34 percent).

The rise of artificial intelligence (AI) is affecting nearly four in 10 jobs worldwide

To make good predictions, AI needs to process large amounts of data. Simple data labeling and affordable storage and processing of structured and unstructured data make it possible to build and train algorithms. Infrastructure technologies essential for AI training at scale include cluster networking such as RDMA and InfiniBand, bare-metal GPU computing, and high-performance storage.

AI can be used to create images and audio that are almost indistinguishable from the real thing. AI is also the basis for deepfakes, a form of image manipulation. It is not clear who is liable when there are unwanted effects. According to OpenAI, around 100 million people now use the chatbot every week.

  • Bing Chat uses a test version of the GPT-4 model, while ChatGPT uses the older GPT-3.5-turbo model. GPT-4 is a more advanced and powerful language model than GPT-3.5-turbo, so Bing Chat should theoretically be better at understanding and generating natural language.
  • Bing Chat is integrated with Microsoft’s search engine, so it can perform web searches and provide links and recommendations.
  • ChatGPT does not have access to the internet unless you have a paid subscription (ChatGPT Plus or Enterprise), which allows you to use a web browser feature powered by Microsoft Bing.
  • Bing Chat and ChatGPT can both generate images using DALL·E.
Types of AI
  • ANI: Artificial narrow intelligence
  • AGI: Artificial general intelligence
  • ASI: Artificial superintelligence (AI that surpasses human intelligence across the board)

Examples:

*The Associated Press reportedly increased its story count by 12 times by teaching AI software to automatically write financial news stories, freeing up the publication’s journalists to write more in-depth articles.
*Deep Patient, an AI tool developed by the Icahn School of Medicine at Mount Sinai, enables doctors to identify high-risk patients before diseases are diagnosed. According to insideBIGDATA, the tool analyzes a patient’s medical history to predict nearly 80 conditions up to a year before they might develop.
*Business analysis without an expert: visual user interfaces make it easy for even non-technical users to navigate systems and find answers they understand.
*Customer communication via natural language chatbots, which let customers ask questions and get information. These chatbots learn over time, making interactions more valuable to customers.

The dangers of AI can be seen, for example, in autonomous weapons such as the ‘killer drone’, an armed drone that can destroy targets or kill people from a distance. Students use ChatGPT to get their homework done.

ChatGPT is developed by OpenAI, the same company that launched Dall-E, an AI system that generates images from text.

Adaptive intelligence applications help companies make better business decisions by combining the power of real-time internal and external data with the science behind decision-making and a highly scalable computing infrastructure. AI capabilities can be applied to those activities that have the greatest and most direct impact on revenue and costs. AI can increase productivity with the same number of people.

AI has become a necessary strategy for any business looking to become more efficient, create new revenue opportunities, and increase customer loyalty, and is a competitive advantage for many organizations. With AI, businesses can accomplish more in less time, create personalized and engaging customer experiences, and predict business outcomes to drive greater profitability. To remain competitive, every business will eventually need to adopt AI and build an AI ecosystem of their own. Businesses will be left behind if they fail to adopt AI, at least to some extent, over the next 10 years. OpenAI is releasing a preview of GPT-4 Turbo, a more powerful and faster version of the technology that powers ChatGPT. The technology is based on online data that runs through April of this year. The original version of GPT-4 had access to data that ran through September 2021.

Interventions in AI’s overly rapid development

The British CMA wants to prevent the artificial intelligence (AI) market from falling into the hands of large technology companies. The regulator wants to introduce rules on transparency and accountability that should protect consumers from the power of companies such as OpenAI, the maker of ChatGPT. AI is the talk of the town in 2023, but in practice the answers from these types of programs are not always factually correct and are sometimes outdated.
The CMA will discuss the measures with AI companies such as Google, Meta and Microsoft in the coming period and will request input from consumer organizations, governments, scientists and other regulators. The CMA will present its findings in early 2024. The EU and the United States are also working on drawing up rules to regulate the AI market. At a two-day summit at Bletchley Park in England, 28 countries signed a declaration on AI safety. In the Netherlands, the Consumers’ Association has asked regulators and politicians to put the rights of consumers at the center of the development and application of AI. The association believes that people must be able to trust that AI systems do not mislead them or misuse personal information.

On October 24, 2023, the House of Representatives voted in favour of an amendment to the Intelligence Act. This should give the intelligence services more room to intercept internet traffic. The General Intelligence and Security Service (AIVD) and the Military Intelligence and Security Service (MIVD) should be able to detect espionage attempts by countries such as Russia and China more quickly. Members of Parliament amended the law to prevent obtained data from simply being shared with foreign security services.

Anthropic

Google is lending 1.9 billion euros to the start-up Anthropic, an American company that specializes in AI. The loan will later be converted into shares in the company. More large tech companies are predicting a bright future for Anthropic, which is working on the chatbot Claude; the company previously raised 4 billion dollars from Amazon. OpenAI, by comparison, is valued at 90 billion dollars. Companies like OpenAI are not transparent about how they obtain their training data, so there are major concerns about how fairly this is done. The EU has already warned AI companies that stricter legislation is coming, and in the United States artists have sued AI companies for violating their copyright.

News about AI

Generative artificial intelligence mainly affects a portion of office jobs in the Netherlands, concludes the UWV. Frank Verduijn, labor market advisor at the benefits agency, tells FD that the technology can easily take over preparatory tasks from consultants or financial advisors: ‘That work is now often done by juniors.’ Professions within ICT and customer service are also vulnerable to replacement by AI.

The Netherlands wants to develop its own Dutch AI language model, GPT-NL, with a subsidy of 13.5 million euros. According to TNO, the source data and algorithm will be made completely public, so that everyone can search the data, for example by their own name, and object to the content.

The NY Times is suing OpenAI and Microsoft for the unsolicited and unpaid use of its content, and several American nonfiction authors are also taking OpenAI and its financier Microsoft to court because their work has been used without permission to train AI models.

Child pornography is increasingly being created using AI, according to Ben van Mierlo, national coordinator of the Team Combating Child Pornography and Child Sex Tourism (TBKK).

G42, the leading UAE-based artificial intelligence (AI) holding company, and Microsoft have announced a strategic investment of $1.5 billion by Microsoft in G42. The investment will strengthen the collaboration between the two companies on the latest Microsoft AI technologies and skills initiatives. As part of this expanded partnership, Microsoft Vice Chairman and President Brad Smith will join the G42 Board of Directors.

OpenAI is working on a search engine feature for ChatGPT called SearchGPT. The feature is still in the development and testing phase and is to be integrated into OpenAI’s AI chatbot.

In late August, confidence in AI’s potential shifted. Wall Street has become suspicious of the technology’s true value and of whether and how it will actually generate revenue for the companies that promote it. Big Tech, despite billions in investments, still has relatively little to show for it: ChatGPT and Google Gemini are proving less like the game changers they were supposed to be. All anyone really wants from AI now is to make mundane tasks a little less taxing, but tech companies keep pushing products that showcase its playful side, like writing fan letters with your child, or making music or paintings by delegating the work to a bot. Some Wall Street investors suspect that the AI craze is a bubble waiting to burst, and note that Nvidia itself is not a fledgling startup promising to spark an AI revolution. As Nvidia CEO Jensen Huang noted on a call with analysts on Wednesday, the company’s chips power not just AI chatbots, but ad-targeting systems, search engines, robotics and recommendation algorithms. Its data center business continues to generate nearly 90% of total revenue. Nvidia makes hardware that is mind-bogglingly complex and hard to copy, which is why even the biggest names in tech, including Google and Amazon, rely on it. But that may not always be the case: those big customers could eventually become big rivals, as virtually all of them race to build their own AI chips.

The United States, the European Union and several other countries have signed the first legally binding international treaty on the use of AI. This happened during a conference in the Lithuanian capital Vilnius. The treaty is intended to stimulate innovation, but at the same time manage possible risks to human rights, democracy and the rule of law. It is a treaty of the Council of Europe, which includes 46 member states. These countries have negotiated with eleven non-member states, including Australia, the US and Canada. The Council of Europe indicates that other countries can join the treaty. In addition to the US and the EU, Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom and Israel have also signed the treaty.

Government organizations use at least 120 systems based on artificial intelligence (AI), reports the Netherlands Court of Audit. In almost half of the cases, the organizations have not, as far as is known, made an assessment of the risks of such systems. Systems with artificial intelligence are not programmed by hand; instead, they have learned to perform tasks using large amounts of data. This can lead to problems, such as privacy violations and discrimination.

The cabinet has spoken with chip manufacturers Nvidia and AMD about the possible construction of a Dutch ‘AI facility’, which should facilitate the development and application of AI within the Netherlands with the help of an AI supercomputer. Nvidia has the knowledge and the high-tech equipment required to build such a facility, as well as experience in building AI facilities. According to Minister Beljaarts, the discussions are a crucial step toward the construction of an AI facility. At the CES trade fair in Las Vegas, Nvidia demonstrated an AI desktop computer with a Grace Blackwell chip, which is intended to make AI development accessible to data scientists, researchers and students.

Groq raised over half a billion dollars in venture capital from strategic investors; the company is receiving strong interest from specialist AI developers. The round is led by BlackRock Private Equity Partners. Other backers in this Series D round include Global Brain/KDDI, Cisco and Samsung, three strategic investors active in the telecom and data world. Groq builds software and hardware for AI developers, focusing on fast AI inference: its LPU technology is a hardware and software platform that delivers exceptional AI computational speed, quality and energy efficiency. Headquartered in Silicon Valley, Groq offers cloud and on-prem solutions at scale. Meta, Microsoft, Amazon and Google are investing over $100 billion this year to expand the AI capacity of their clouds.

To ensure that AI is developed and deployed safely and reliably, the AI Regulation came into effect in the European Union on 1 August 2024. It addresses the risks of AI systems and applications, and will be applied in stages.

The European Commission will supervise the large AI models that can be used for many different purposes. National regulators will ensure that the requirements for high-risk AI are met. This will be laid down in national regulations. Meta wants to invest up to 65 billion dollars (61.8 billion euros) this year in AI projects, including the construction of new data centers and the purchase of new chips.

Meta is buying into the company Scale AI for 14.3 billion dollars (12.4 billion euros). With this step, CEO Mark Zuckerberg hopes to catch up in the field of AI. Part of the deal is that Scale AI’s current CEO and founder, Alexandr Wang, will be employed by Meta, where he will breathe new life into Meta’s AI activities. Scale AI helps train AI models from other companies, including Google and OpenAI, by having people label and annotate data; this work is often done by employees in low-wage countries.

With Meta’s investment, Scale AI is valued at $29 billion. In exchange for the injection, Meta receives a 49% minority stake in the AI company. This is also a way to buy AI knowledge without acquiring the company itself, which could otherwise lead to a block by regulators; Microsoft, for example, holds a stake in OpenAI in a similar construction. Scale AI wants to use Meta’s money to grow faster and reward existing shareholders for their investment. Scale AI will remain independent and will therefore continue to collaborate with other large companies.

The company was founded in 2016 by Wang and now has 1,500 employees, affectionately called ‘Scaliens’ by him.

To further develop Meta’s AI model Llama, the company plans to build a 2 gigawatt data center. According to Mark Zuckerberg, the complex will be so large that it could cover “a significant portion of Manhattan.” Meta is also investing in 1.3 million new processors for all the computing behind AI applications.

Chapter, the startup of Squla founder Andre Haardt, managed to raise 3 million euros from investors for its AI energy platform. The platform is intended for fitters of heat pumps, charging stations and solar panels, and saves a lot of time and money on retraining fitters. The app is available in multiple languages, so that migrant fitters can also easily solve problems in the event of malfunctions. Chapter is active in the Netherlands, France and Germany.

Impaired cognition due to ChatGPT use

A study from MIT has revealed that heavy users of ChatGPT may experience a dangerous side effect: impaired cognition. According to research published this month, people who heavily use the AI writing chatbot show reduced brain activity, memory problems and decreased engagement in tasks. The researchers from the MIT Media Lab came to this conclusion by analyzing the brains of 54 volunteers aged 18 to 39 from the Boston area.

The tests involved participants writing essays modeled on the SAT, the American entrance exam. One group used ChatGPT, another group used only the Google search engine, and the rest used no tools.

The topics of the essays varied, but were always related to personal experiences. The texts were written in sessions spread over four months while scientists monitored the participants’ brain activity via electroencephalograms (EEG).

When analyzing the results, the researchers found that participants who used ChatGPT showed less neural engagement during the sessions, low activation in areas related to creativity and memory, and less connection to their own texts.

In addition, many had difficulty remembering or explaining what they had written shortly after writing it. The researchers also noted that many volunteers became more reliant on the copy-and-paste function, a seemingly harmless habit that reduces the intellectual effort of the writing process.

On the other hand, participants who wrote the essays without the use of tools showed more brain activity, more curiosity, and greater originality in their writing. With this in mind, the researchers concluded that frequent use of AI may come with “cognitive costs.”

“These results raise concerns about the long-term educational implications of reliance on LLMs and highlight the need for further research into the role of AI in learning,” the researchers said. (source Techbreak.com)
