paserbyp: (Default)


Microsoft (MSFT) on Tuesday said it is laying off 3% of its employees, which equals an estimated 6,000 positions.

This move will reportedly include a cut in the number of middle managers in the company, though it will affect “all levels, teams.” The company is “aiming to reduce management layers,” and reports say the current layoffs are not tied to performance, unlike the round of layoffs in January.

A second goal, according to another report, is to increase the ratio of coders versus non-coders on projects.

Anthropic CEO Dario Amodei has predicted that AI will be doing all coding tasks by next year—but an existential crisis is already hitting some software engineers. One man who lost his job last year has had to turn to living in an RV trailer, DoorDashing and selling his household items on eBay to make ends meet, as his once $150k salary has turned to dust.

Tech layoffs are nothing new for Shawn (https://www.linkedin.com/in/shawnfromportland).

The software engineer first lost his job after the 2008 financial crisis and then again during the pandemic, but on both occasions, he was back on his feet just a few months later.

However, when Shawn was given the pink slip last April he quickly realized this time was different: AI’s revolution of the tech industry was playing out right in front of him.

Despite having two decades of experience and a computer science degree, he’s landed fewer than 10 interviews from the 800 applications he’s sent out. Worse yet, some of those few interviews have been with an AI agent instead of a human.

“I feel super invisible,” Shawn says. “I feel unseen. I feel like I'm filtered out before a human is even in the chain.”

And while fears about AI replacing jobs have been around for years, the 42-year-old thinks his experience is likely only the beginning of a “social and economic disaster tidal wave.”

“The Great Displacement is already well underway,” he recently wrote on his Substack (https://shawnfromportland.substack.com/p/the-great-displacement-is-already).

Shawn’s last job was working at a company focused on the metaverse—an area that was predicted to be the next great thing, only to be overshadowed in part by the rise of ChatGPT.

Now living in a small RV trailer in central New York with no lead on a new tech job, Shawn’s had to turn to creative strategies to make ends meet, and try to replace a fraction of his former $150,000 salary.

In between searching incessantly for new jobs, checking his empty email inbox, and researching the latest AI news, he delivers DoorDash orders, like Buffalo Wild Wings to a local Holiday Inn, and sells random household items on eBay, like an old laptop. In total, it only adds up to a few hundred bucks.

He’s also considered going back to school for a tech certificate—or even to obtain his CDL trucking license—but both were scratched off his list due to their hefty financial barrier to entry.

Shawn’s reality may shock some, considering that the U.S. Bureau of Labor Statistics has consistently labeled software engineering as one of the fastest-growing fields, but stories like his may soon become all the more common.

Earlier this year, Anthropic CEO Dario Amodei predicted that more software jobs will soon go by the wayside. By September, he said, AI will be writing 90% of code; moreover, “in 12 months, we may be in a world where AI is writing essentially all of the code,” he told the Council on Foreign Relations.

In 2024, over 150,000 tech workers lost their jobs, and so far in 2025, that number has reached over 50,000, according to Layoffs.fyi.

“It’s coming for basically everyone in due time, and we are already overdue for proposing any real solution in society to heading off the worst of these effects,” Shawn wrote.

“The discussion of AI job replacement in the mainstream is still viewed as something coming in the vague future rather than something that’s already underway.”

Despite being unemployed for over a year, Shawn still hasn’t lost hope, nor is he necessarily mad at AI for replacing him; he still calls himself an “AI maximalist.”

"If AI really legitimately can do a better job than me, I'm not gonna sit here and feel bad about, oh, it replaced me and it doesn't have the human touch,” Shawn says.

What’s frustrating, he adds, is that companies are using AI to save money by cutting talent—rather than leveraging its power and embracing cyborg workers.

“I think there's this problem where people are stuck in the old world business mindset of, well, if I can do the same work that 10 developers were doing with one developer, let's just cut the developer team instead of saying, oh, well, we've got a 10 developer team, let's do 1,000x the work that we were doing before,” Shawn says.
paserbyp: (Default)
Elon Musk’s Colossus AI infrastructure, said to be one of the most powerful AI computing clusters in the world, has just reached full operational capacity. Designed to push the boundaries of AI, this massive computing system now consists of 200,000 GPUs, all running on Tesla Megapack batteries. This is a significant milestone in Musk’s growing push into AI.

With the on-site substation going online and connecting to the main power grid, phase 1 of Colossus AI infrastructure, located in Memphis, TN, is now complete. The supercomputer is now running at 150 MW from the grid, according to the Greater Memphis Chamber. The additional 150-megawatt Megapack battery system will act as a backup power source, ensuring continued operation during outages or periods of heightened electricity demand.

Colossus AI is the flagship product of Musk’s official AI company, xAI. The supercomputer was first activated in July last year with 100,000 Nvidia GPUs, after being built at an astonishing pace. The entire project was completed in 122 days, while the hardware installation to training phase took only 19 days. The pace of the project impressed Nvidia CEO Jensen Huang, who pointed out that projects of this scale typically take around four years, making its deployment remarkably fast.

“As far as I know, there’s only one person in the world who could do that,” said Huang. “Elon is singular in his understanding of engineering and construction and large systems and marshaling resources; it’s just unbelievable.”

However, the speed came at a cost, as the facility initially lacked a direct connection to the power grid. To keep operations running, the site depended on natural gas turbine generators for electricity, raising concerns about emissions and sustainability.

Early reports suggested 14 turbines were supplying power, each generating 2.5 MW, but observations from residents indicated the number may have exceeded 35 in the surrounding area. That is more than twice the permitted limit. This reliance on temporary power sources had sparked discussions about the long-term energy plan for the facility, especially as xAI looks to scale up operations further.

Adding more GPUs to the infrastructure means that the AI cluster can now rely more on grid power rather than gas-powered generators. This will help improve efficiency and address environmental concerns. Reportedly, xAI plans to remove half the temporary generators by the end of the summer. The other half of the temporary generators will have to remain to deliver the electrical needs of the second phase of the Memphis Supercluster.

Musk plans to double the capacity of Colossus AI before the end of this year. Another 150 MW is going to be added, taking the total capacity to 300 MW, enough to power roughly 300,000 homes (assuming an average draw of about 1 kW per home). It’s not surprising that this massive power demand has sparked concerns about whether the Tennessee Valley Authority (TVA) has sufficient capacity to support it.

xAI has publicly stated plans to expand its Colossus supercomputer to over 1 million GPUs. For the local economy, Colossus AI promises economic development and infrastructure investment. However, concerns persist regarding disruptions to power quality for residents and the project’s environmental impact.

“You don’t become the moniker for technological innovation because someone comes in and exploits your natural resources, your water, exploits the loopholes that allow them to pollute the air,” said KeShaun Pearson, the director of the grassroots organization Memphis Community Against Pollution (MCAP). “That’s not what makes you a technological city. That spin is dangerous because it opens our city up for exploitation even further.”

The road to powering a million GPUs started when Musk founded xAI in July 2023, with the stated goal of “understanding the true nature of the universe.” In more practical terms, Musk wanted an AI lab under his own direction, free from the influences of Microsoft, Google, or other major tech firms.

The company is an answer to the growing dominance of OpenAI (which now has Microsoft as a close partner) and Google’s DeepMind. xAI is also integrated with Musk’s other ventures, including SpaceX and Tesla. With Colossus now operating at full capacity, xAI is positioned to accelerate the development and deployment of AI across Musk’s broader ecosystem.
paserbyp: (Default)
India has launched military strikes against Pakistan, putting the two nuclear-armed neighbours on the brink of an all-out war.

The flare-up means that two of the region’s largest militaries are again in face-to-face conflict.

The stand-off pits India, a global defence giant, against a country that may be much smaller, but is nevertheless heavily militarised and has dedicated a significant share of its resources to preparing for war.

As the world’s most populous nation, India has one of the largest militaries, numbering around 1.4 million active service personnel, which include 1.2 million in the army, 60,000 in the navy and 127,000 in the air force. India also has 1.6 million-strong paramilitary forces and a reserve of 1.1 million.

The country is a defence expenditure heavyweight. Its defence spend reached £58 billion ($77.4 billion) in 2024, the second-highest outlay in Asia after China.

Meanwhile, Pakistan’s population is a fifth of the size and the country has been mired in an economic crisis for years.

Last year, Pakistan’s defence budget was estimated to have been a 10th of that of its eastern neighbour.

Pakistan has become heavily militarised to fend off Indian control, which has come at great cost to its democracy.

The military exerts significant control over the civilian government, with Gen Syed Asim Munir, the head of the army, widely seen as the most powerful man in the country.

While India’s military is increasingly deployed to face China, Pakistan has built up a defence posture and doctrine revolving almost entirely around India.

Pakistan fields a total of around 650,000 active service personnel, including 560,000 in the army, 23,800 in the navy and 70,000 in the air force. It also has 280,000-strong paramilitary forces, according to the International Institute for Strategic Studies.

In limited exchanges, such as those seen in the past 24 hours, Pakistan can punch above its weight, though analysts say that Delhi’s numerical and economic superiority could come to bear very quickly in a full-blown war.

Pakistan has leaned heavily in recent years towards China for its arms, shifting away from more costly Western suppliers.

India has significant quantities of equipment from Russia, but has begun buying more from France and America.

On the battlefield, India is thought to have around 3,100 main battle tanks, including Arjun, T-72 and T-90 models.

Pakistan has around 2,500, which include Al-Khalid, T-80, T-54/55, Type-59/Al Zarrar, Type-69 and Type 85 models.

Each country also has a significant air force. India has a mixture including Dassault Rafale fighters, Sukhoi Su-30s and MiG-29s, MiG-27s and MiG-21s.

Pakistan has Chinese J-10s and JF-17s, as well as American F-16s, Mirage 3s and Mirage 5s.

The two countries may be closer to parity in their nuclear weapons.

India conducted its first nuclear test in 1974 and Pakistan became a nuclear power in 1998.

India has never declared the size of its nuclear arsenal, but one assessment places the country’s stockpile at 160 nuclear warheads, according to the Centre for Arms Control and Non-Proliferation. These can be deployed in land-based ballistic missiles, submarine-launched missiles and aircraft with nuclear bombs and missiles.

Pakistan is estimated to have around 170 warheads and nuclear-capable ballistic missiles of varying ranges. The country can also launch the weapons from planes. In 2017, Pakistan test-fired a submarine-launched missile, though this is not yet thought to be ready for use.

Even a small nuclear exchange between India and Pakistan could kill 20 million people in a week, according to the Centre for Arms Control and Non-Proliferation.
paserbyp: (Default)
On stage at Microsoft’s 50th anniversary celebration in Redmond earlier this month, CEO Satya Nadella showed a video of himself retracing the code of the company’s first-ever product, with help from AI.

“You know intelligence has been commoditized when CEOs can start vibe coding,” he told the hundreds of employees in attendance.

The comment was a sign of how much this term—and the act and mindset it aptly describes—have taken root in the tech world. Over the past few months, the normally exacting art of coding has seen a profusion of ✨vibes✨ thanks to AI.

The meme started with a post from former Tesla Senior Director of AI Andrej Karpathy in February. Karpathy described it as an approach to coding “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

The concept gained traction because it touched on a transformation—a vibe shift?—that was already underway among some programmers, according to Amjad Masad, founder and CEO of AI app development platform Replit. As LLM-powered tools like Cursor, Replit, and Windsurf—which is reportedly in talks to be acquired by OpenAI—have gotten smarter, AI has made it easier to just…sort of…wing it.

“Coding has been seen as this—as hard a science as you can get. It’s very concrete, mathematical structure, and needs to be very precise,” Masad told Tech Brew. “What is the opposite of precision? It is vibes, and so it is communicating to the public that coding is no longer about precision. It’s more about vibes, ideas, and so on.”

The rise of automated programming could transform the field of software development. Companies are already increasingly turning to AI platforms to expedite coding work, data from spend management platform Ramp shows. While experts say coding skills are needed to debug and understand context while vibe coding, AI will likely continue to bring down the barrier to entry for creating software.

Coding has long been one of the most intuitive use cases for LLMs. OpenAI first introduced Codex, its AI programming tool based on GPT-3, more than a year before the debut of ChatGPT in 2022. Companies of all kinds often tell us that code development work is one of the first places they attempt to apply generative AI internally.

But the act of vibe coding describes a process beyond simple programming assistance, according to Karpathy’s original post. It’s an attitude of blowing through error messages and directing the AI to perform simple tasks rather than doing it oneself—and trusting that the AI will sort it all out in the end.

“It’s not really coding—I just see stuff, say stuff, run stuff, and copy-paste stuff, and it mostly works,” he wrote.

Masad said he builds personal apps like health tracking tools and data dashboards at work with Replit, which is one of the less coding-heavy of these platforms. Sometimes, he will attempt to spin up a substitute tool if he doesn’t want to pay for an enterprise software subscription. He recently used the platform to make a YouTube video downloader because he was sick of ads on existing websites.

Srini Iragavarapu, director of generative AI applications and developer experiences at Amazon Web Services, told us that coding tools like Amazon Q Developer have helped his software developer team more easily switch between coding languages they were previously unfamiliar with. AI is not fully automating coding work, he said, but it is allowing developers to get up to speed on new tasks more easily.

“The time to entry, and even to ramp up to newer things, is what is getting reduced drastically because of this,” Iragavarapu said. “[It] means now you’re chugging out features for customers a lot faster to solve their own sets of problems as well.”

Data from corporate spend management platform Ramp showed that business spending on AI coding platforms like Cursor, Lovable, and Codeium (now Windsurf) grew at a faster clip in the first months of this year than spending on AI model companies more broadly. Ramp economist Ara Kharazian said this difference was significant despite the comparison being between smaller companies and more established ones.

“The kind of month-over-month growth that we’re seeing right now is still pretty rare,” Kharazian said. “If the instinct is to think that vibe coding is something that’s caught on in the amateur community or by independent software engineers just making fun tools…we’re also seeing this level of adoption in high-growth software companies, everything from startups to enterprise, adoption across sectors, certainly concentrated in the tech sector, but by fairly large companies that are spending very large amounts of money onboarding many of their users and software engineers onto these tools.”

Not everyone agrees that vibe coding is quite ready to transform the industry. Peter Wang, chief AI and innovation officer and co-founder of data science and AI distribution platform Anaconda, said it’s currently more useful for senior developers who know the specific prompts to create what they need, and how to assemble and test those pieces.

“It’s definitely the beginning of something interesting, but in its current form, it’s quite limited,” Wang said. “It’s sort of like if someone who’s already an industrial designer goes and 3D prints all the parts of a car, versus someone who’s not an industrial designer trying to 3D print a whole car from scratch. One’s going to go way better than the other.”

Wang said he thinks that vibe coding will really start to come into its own when it can yield modular parts of software that even an amateur coder might easily assemble into whatever program they need.

“What I’m looking for is the emergence of something like a new approach to programs that makes little modular pieces that can be assembled more robustly by the vibe coding approach,” Wang said. “We don’t really have that Easy Bake thing yet. Right now, it’s like, ‘Here’s the recipe. Go cook the entire meal for me.’...I think if we can actually get to that point, then it’ll unlock a world of possibilities.”
paserbyp: (Default)
A new book by the Greek left-wing radical Yanis Varoufakis about what killed capitalism has been published in Russian (it appeared in Greek and English in 2023: https://archive.org/details/technofeudalism-what-killed-capitalism-2023-yanis-varoufakis). It is called "Technofeudalism" and argues that we are all now "cloud serfs."

Yanis Varoufakis's conflict with Western political elites began back in 2015, when he served as finance minister of Greece during its severe crisis. Since then he has constantly criticized the political and economic decisions of the EU and the US from the left: at public events in London and Brussels, in his books, on his YouTube channel, in the media, and in documentary films.

From time to time European politicians answer him. In April 2024, for example, Germany barred Varoufakis from taking part in events on its territory and then, according to media reports, even from entering the country. That, of course, plays into his hands: he has become one of the main stars of the European left. His debate with the philosopher Slavoj Žižek (https://www.youtube.com/watch?v=Ghx0sq_gXK4) has drawn more than half a million viewers, and legendary directors such as Costa-Gavras make biopics about him.

In Varoufakis's view, the dream of every leftist on earth has finally come true: capitalism is dead. The catch is that it has been replaced not by socialism, as Marx predicted, but by a formation that turned out to be far uglier: technofeudalism. Put very crudely, this is a reality in which political and economic power is concentrated in the hands of a new ruling class, the IT oligarchs.

According to Varoufakis, three historical processes gave rise to technofeudalism:

1) The political hegemony of the United States after the Second World War, crowned by victory in the Cold War in 1991.

2) The economic crises caused by the globalization of capitalism: the dismantling of the Bretton Woods system of monetary relations and trade settlements from 1971, the financial crisis of 2007–2008, and others.

3) The creation of the internet, its development, and its privatization by the largest IT companies of the US and China.

Varoufakis tells this story drawing alternately on academic sources, pop culture, and ancient mythology, building on his own extensive research experience at universities in the UK, Australia, and Greece.

He already covered the first two points in previous books. The most interesting part begins when Varoufakis turns to the third: the story of how the internet destroyed the "evolutionary fitness of capitalism."

Where capitalism once reproduced its power through earthly market capital, with the creation of cloud capital, the internet, it lost that power. Cloud capital allowed the owners of Amazon and Alibaba to become a new ruling class: they do not depend on the rules of the traditional capitalist market and wield enormous influence over the political scene in the world's leading economies. Incidentally, Varoufakis published his book before the most influential technofeudal lord of our time, Elon Musk, became a powerful figure in the US administration.

Varoufakis insists that we are witnessing the death of capitalism itself, not just another metamorphosis. Today's feudal lords are no longer interested in profit (the engine of the late capitalism of the free market). Far more important to them is cloud rent: the user's need to pay for access to the internet, that is, to cloud capital, which excludes competition.

The situation is aggravated by the fact that users not only pay cloud rent but also interact with cloud capital without pause: when a smart speaker eavesdrops on you, when you share personal data with online platforms, or when you are kept on a leash by social media algorithms. "The real revolution that cloud capital has wrought upon humanity," Varoufakis writes, "is the transformation of billions of us into willing cloud serfs, gladly working for free to reproduce cloud capital for the benefit of its owners."

Having described the new economic reality, Varoufakis draws bleak political conclusions. If in the twentieth century American capitalism confronted Soviet socialism, then in the twenty-first, and especially in connection with the Russian-Ukrainian war, the struggle between the American and Chinese models of technofeudalism has intensified. Neither appeals to the author: both, in one way or another, condemn the liberal individual to death.

Varoufakis does not tell readers anything radically new. Talk of a new political Middle Ages has long been a commonplace. Back in the second half of the 1980s Umberto Eco published the prophetic essay "The Middle Ages Have Already Begun." Left-wing theorists constantly point to the anti-democratic consequences of the merger of capitalism with digital technologies; Shoshana Zuboff wrote about this, for example, in "The Age of Surveillance Capitalism."

To understand what Varoufakis adds to this discussion, it helps to picture his antagonist, which is not so much the liberal center of the EU as the alternative right. The political ideology of a digital monarchy built on a technofeudal economy was formulated and popularized by the "dark enlightenment" thinkers, above all Curtis Yarvin. Their ideas have had considerable influence on many people in tech, as well as on officials of Donald Trump's administration, including Vice President JD Vance.

Varoufakis subjects precisely this world of right-wing technofeudalism, the one American tech elites dream of, to harsh Marxist critique. Though why say dream? They are already trying to bring it about in order, in Varoufakis's view, to continue the rivalry with China.

It is quite possible that not all of the grotesque concepts and ideas Varoufakis proposes will take root in the social sciences and humanities. But after reading this book the reader will have no doubt that it is a highly topical reflection on the alarming link between the political ambitions of the leading powers, their economic interests, and new technologies.
paserbyp: (Default)
In Moscow, a criminal case has been opened for using violence against a representative of the authorities (Article 318 of the Criminal Code) against former KGB investigator Alexander Tsopov, who was beaten by bailiffs at the Basmanny court.

The 70-year-old Tsopov acts as defender of 74-year-old "citizen of the USSR" Valentina Reunova, who calls herself "chair of the Supreme Soviet of the USSR." In January 2024 she was detained on charges of justifying terrorism (Part 2 of Article 205.2 of the Criminal Code) and inciting the violent seizure of power in the Russian Federation (Article 205.1) over a YouTube stream. She is under house arrest. Tsopov himself does not hold the formal status of an advocate.

The conflict with the bailiffs took place on April 23 at Moscow's Basmanny court. According to Tsopov, after 6 p.m. the bailiffs demanded that he and other members of the public leave the courthouse because the working day had ended, even though the hearing in Reunova's case scheduled for that day had not yet begun.

Tsopov remarked to the bailiffs that they were "strolling around with a fascist fasces" on their shoulder patches, and when one of them, named Novikov, allegedly began "dragging" Reunova's daughter toward the stairs, Tsopov grabbed him by the arms. Both ended up falling onto a bench.

Judging by video from the court corridor, two more bailiffs approached the men, after which Tsopov was thrown to the floor and handcuffed. Novikov threatened the former KGB investigator with violence: "I'll fucking strangle you... Let go of my hands or I'll beat your kidneys in." A colleague of his kicked Tsopov as he lay there, saying "Fuck off, you old scum."

Tsopov's nose began to bleed. Ambulance medics who arrived at the court gave him first aid and took him to the hospital, after which he was delivered to the Krasnoselskoye district police department. There the former KGB investigator filed a statement demanding that a criminal case be opened against bailiff Novikov for exceeding his official powers.

Novikov was soon released from the police station, while Tsopov was detained. The next morning he was taken to the Meshchansky interdistrict investigative department, where a criminal case was opened against him.

The most interesting part of this case is that the "Citizens of the USSR" deny the collapse of the Soviet Union, use Soviet passports, consider the current Russian authorities illegitimate, and refuse to pay taxes, utility bills, and fines. The movement (under various names, one of them "USSR") is on Russia's list of extremist organizations.

Details: https://x.com/mediazzzona/status/1916487439817834512
paserbyp: (Default)
The app that serves up hot takes on how to fix the world recently got one from its founding father. Twitter (now X) founder Jack Dorsey posted pithily: “delete all IP law,” to which the app’s stepfather, Elon Musk, replied with even greater brevity: “I agree.”

So, what do the two tech bigwigs have against laws restricting the commercial use of patented inventions and copyrighted works of creative expression? It’s probably got to do with how they impact the current talk of the town in tech: AI models trained on copyrighted works produced through hours of human chin-scratching.

Dorsey’s call to Ctrl A+Delete terabytes of laws regulating the monetization of human ingenuity garnered nearly 5,000 replies:

* Tech investor Chris Messina commented that “automated IP fines/3-strike rules for AI infringement may become the substitute for putting poor people in jail for cannabis possession.”

* Tech entrepreneur and attorney Nicole Shanahan disagreed, saying deletion wasn’t reasonable but that she’s open to discussing IP reform.

* Writer Lincoln Michel suggested that Dorsey and Musk’s anti-IP stance is hypocritical, claiming that “none of Jack or Elon’s companies would exist without IP law.”

Since it’s hard to boil this all down to 280 characters, let’s get into the complicated legal and business issues behind the social media squabble.

Musk and Dorsey are members of the Silicon Valley clique convinced that current IP regulations are as conducive to tech advances as human-operated toll booths are to speeding up traffic. Dorsey is a longtime champion of open-source software. In 2019, he founded the Twitter clone Bluesky as an open-source project, and his company Block recently released the AI agent-building application called Goose, which is free for anyone to use.

Before that, Musk once said that “patents are for the weak”:

* He famously declared a decade ago that Tesla wouldn’t sue anyone who uses its tech “in good faith,” though it did subsequently end up in a patent dispute with an Australian electronics company.

* The first version of Musk’s AI bot, Grok, was partially open-source, pitting it philosophically against the proprietary (aka not free to use for profit) OpenAI models.

Intellectual property law professor Dennis Crouch argues that Dorsey and Musk don’t like IP law because it impedes their business interests as tech moguls, since these laws are meant to protect small enterprises against corporate behemoths.

Unsurprisingly, the biggest advocates for compensating creatives for their work that gets used to train AI are…creatives. Michel declared in his X response that Musk and Dorsey simply “hate artists.”

More than 30,000 creators recently signed a Statement on AI Training almost as succinct as Dorsey’s post. It said: “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.” Similar sentiments have been shared by visual and musical artists, as well as journalists, but the legal questions remain unsettled:

* The New York Times is suing OpenAI for copyright infringement, alleging that the company used its content illegally to train ChatGPT. However, several news organizations, like News Corp, Axel Springer (Morning Brew’s parent company), and Time magazine, have entered licensing agreements with AI companies.

* A group of publishers and authors, including Sarah Silverman and Junot Díaz, are suing Meta, alleging that it used their copyrighted works to train its Llama AI models without compensating them. Meta claims that feeding its AI training algorithm the works of literature constituted “fair use.”

What is fair use?

It’s the legal term for when copyright-protected content can be used without the owner’s permission for a “transformative” purpose such as criticism, comment, news reporting, teaching, scholarship, or research—for example, a poem quoted in a news article, or an SNL parody of the latest Severance episode. Legal scholars say judges in the creators vs. AI companies cases will have to consider the complex technicalities of exactly how the AI was trained using proprietary content and whether it meets the definition of fair use.

Experts say that IP law needs to be updated to keep pace with technological advancements and the evolving distribution of content. The breakneck pace of AI development creates even more urgency for these updates (More details: https://hls.harvard.edu/today/is-the-law-playing-catch-up-with-ai).

Some warn that a global patchwork of laws could complicate AI development and have called for the establishment of international standards.

Legislators worldwide have been working to revise IP laws for the age of AI, aiming to strike a balance between innovation and fairly compensating creators. Some countries are considering a more pro-AI approach, like the UK, where the government is weighing a controversial rule that would let companies use copyrighted works without permission if IP owners don’t opt out.

4Chan

Apr. 16th, 2025 09:10 am
paserbyp: (Default)
Starting on Monday night, users began reporting a mass outage at the 4chan.org domain, which has persisted for the last 12 hours, according to Downdetector.com.

But during the outage, users spotted evidence that 4chan suffered a breach that enabled a hacker to gain access to the site. This includes a screenshot that apparently shows an account from 4chan’s owner Hiroyuki Nishimura writing: “LOL HACKED I LOVE DICKS.”

Another post from the hijacked Nishimura’s account indicates the hacker gained access to the backend administrative site for 4chan. The same screenshot shows that 4chan runs on an old version of PHP, a scripting language for websites.

As a result, users suspect the hacker exploited age-old vulnerabilities in 4chan to conduct the takeover. A rival imageboard at Soyjak.party has also been celebrating the site’s shutdown.

It’s possible someone at Soyjak.party was involved in the hack, since the 4chan board for questions and answers was briefly changed to say “SOYJAK.PARTY WON.” The Soyjak.party site has also been posting screenshots showing that the hacker was able to access moderator functions for 4chan, including the ability to ban 4chan users and to reveal their IP address, ISP, and geographic location.

In addition, links have appeared on Soyjak and on another web forum, Kiwi Farms, that claim to contain data stolen from 4chan, including the usernames and email addresses for hundreds of moderators. So, it’s possible the hacker may have stolen email address information for all registered users of the site.
paserbyp: (Default)
Cybersecurity researchers are warning of a new type of supply chain attack, Slopsquatting, induced by a hallucinating generative AI model recommending non-existent dependencies.

According to research by a team from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma, package hallucination is common in code generated by large language models (LLMs), and threat actors can take advantage of it.

“The reliance of popular programming languages such as Python and JavaScript on centralized package repositories and open-source software, combined with the emergence of code-generating LLMs, has created a new type of threat to the software supply chain: package hallucinations,” the researchers said in a paper (https://arxiv.org/pdf/2406.10279).

From the analysis of 16 code-generation models, including GPT-4, GPT-3.5, CodeLlama, DeepSeek, and Mistral, the researchers observed that approximately a fifth of the recommended packages were fake.

According to the researchers, threat actors can register hallucinated packages and use them to distribute malicious code.

“If a single hallucinated package becomes widely recommended by AI tools, and an attacker has registered that name, the potential for widespread compromise is real,” according to a Socket analysis of the research. “And given that many developers trust the output of AI tools without rigorous validation, the window of opportunity is wide open.”

Slopsquatting, as researchers are calling it, is a term first coined by Seth Larson, security developer-in-residence at the Python Software Foundation (PSF), for its resemblance to the typosquatting technique. Instead of relying on a user’s mistake, as typosquats do, threat actors rely on an AI model’s mistake.

A significant share of the packages recommended in test samples, 19.7% (about 205,000 packages), were found to be fake. Open-source models, like DeepSeek and WizardCoder, hallucinated more frequently, at 21.7% on average, compared with commercial ones such as GPT-4 (5.2%).

Researchers found CodeLlama (hallucinating in over a third of its outputs) to be the worst offender, and GPT-4 Turbo (just 3.59% hallucinations) to be the best performer.

These package hallucinations are particularly dangerous as they were found to be persistent, repetitive, and believable.

When researchers reran 500 prompts that had previously produced hallucinated packages, 43% of hallucinations reappeared every time in 10 successive re-runs, with 58% of them appearing in more than one run.

The study concluded that this persistence indicates “that the majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts.” This increases their value to attackers, it added.

Additionally, these hallucinated package names were observed to be “semantically convincing”. Thirty-eight percent of them had moderate string similarity to real packages, suggesting a similar naming structure. “Only 13% of hallucinations were simple off-by-one typos,” Socket added.
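
For a sense of what “string similarity” means here, a quick Python check using the standard library’s difflib can score how close a hallucinated name sits to a real package. This is only an illustration: the package names below are invented examples, and the 0.8 threshold is an arbitrary cutoff, not the paper’s methodology.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two package names."""
    return SequenceMatcher(None, a, b).ratio()

real_packages = ["requests", "beautifulsoup4", "python-dateutil"]
hallucinated = ["requestes", "beautifulsoup-parser", "dateutils-python"]  # invented examples

for fake in hallucinated:
    closest = max(real_packages, key=lambda real: similarity(fake, real))
    score = similarity(fake, closest)
    note = "near-miss of a real package" if score >= 0.8 else "less similar"
    print(f"{fake!r} vs {closest!r}: {score:.2f} ({note})")
```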

While neither the Socket analysis nor the research paper mentioned any in-the-wild slopsquatting instances, both advised protective measures. Socket recommended that developers run dependency scanners before production and at runtime to fish out malicious packages. Rushed security testing is also one of the reasons AI models remain prone to hallucinations: OpenAI was recently criticized for significantly slashing its models’ testing time and resources, exposing users to significant threats.
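
As a minimal sketch of the kind of pre-install check such a scanner performs, the Python snippet below asks the public PyPI JSON API (https://pypi.org/pypi/<name>/json) whether each AI-suggested dependency actually exists as a registered project; the dependency list is hypothetical.

```python
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # a 404 here means no such project is registered

# Hypothetical dependency list copied from an LLM-generated code snippet.
suggested_packages = ["requests", "numpy", "totally-made-up-helper-lib"]

for pkg in suggested_packages:
    if package_exists_on_pypi(pkg):
        print(f"{pkg}: found on PyPI (still review who publishes it)")
    else:
        print(f"{pkg}: NOT on PyPI -- possible hallucination, do not install blindly")
```

Note that existence on PyPI is not proof of safety: the whole point of slopsquatting is that an attacker may already have registered the hallucinated name, so unfamiliar packages still deserve manual review.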
paserbyp: (Default)
Oracle has continued to downplay a data breach it suffered earlier this year, insisting in an email sent to customers this week that the hack did not involve its core platform, Oracle Cloud Infrastructure (OCI).

Normally, a denial like this would be the end of the story, but the circumstances of this breach and Oracle’s confusing response to it over recent weeks have left some questioning the company’s account of the incident.

This week’s email, forwarded to this publication by Oracle, claimed that the incident involved “two obsolete servers” unconnected to the OCI or any customer cloud environments.

“Oracle would like to state unequivocally that the Oracle Cloud — also known as Oracle Cloud Infrastructure or OCI — has NOT experienced a security breach,” stated the letter.

“No OCI customer environment has been penetrated. No OCI customer data has been viewed or stolen. No OCI service has been interrupted or compromised in any way,” it continued.

No usable passwords were exposed because these were “encrypted and/or hashed.”

“Therefore, the hacker was not able to access any customer environments or customer data,” the email concluded.

But if the “two obsolete servers” weren’t part of the OCI system, what were they part of? And what, if any, customer data did the hacker access? At this point, the opinions of security researchers and the counter-assertions by Oracle start to diverge.

The fact that a breach of some kind had occurred was first made public in March, when a hacker using the moniker ‘rose87168’ publicized on a breach forum their theft of six million single sign-on (SSO) and Lightweight Directory Access Protocol (LDAP) credentials, among other sensitive data, allegedly stolen from the Oracle Cloud platform.

If true, that would be a big deal; SSO and LDAP credentials, even if competently hashed, are not something any cloud provider or customer would want to be in the hands of a third party.

The hacker told Bleeping Computer that they gained access to the Oracle system in February, after which they had attempted (and failed) to extort payment from Oracle in return for not releasing the data.

But even if the hashes remained secure, other sensitive data could be used to mount targeted attacks, noted security company Trustwave:

“The dataset includes PII, such as first and last names, full display names, email addresses, job titles, department numbers, telephone numbers, mobile numbers, and even home contact details,” wrote Trustwave’s researchers, pointing out that the consequences of such a breach could be expensive.

“For the organizations affected, a leak like this one could result in data breach liabilities, regulatory penalties, reputational damage, operational disruption, and long-term erosion of client trust,” they wrote.

Oracle subsequently denied the breach claim, telling the media: “The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data.”

In early April, the company changed tack slightly, admitting that it had been breached, but insisting that the data had been taken from a “legacy environment” (aka Oracle Classic) dating back to 2017. That story claimed that Oracle had started contacting customers, mentioning that the FBI and CrowdStrike were investigating the incident.

This incident was in addition to a separate data breach – described as a “cybersecurity event” – affecting Oracle’s healthcare subsidiary, Oracle Health.

So far so good regarding Oracle’s denials, except that the hacker subsequently shared data showing their access to login.us2.oraclecloud.com, a service that is part of the Oracle Access Manager, the company’s IAM system used to control access to Oracle-hosted systems.

It also emerged that some of the leaked data appeared to be from 2024 or 2025, casting doubt on Oracle’s claim that it was old.

So, was Oracle’s main OCI platform breached or not? Not everyone is convinced by the company’s flat denials. According to prominent security researcher Kevin Beaumont, the company was basically “wordsmithing” the difference between the Oracle Classic servers it admits were breached, and OCI servers, which it still maintains were not.

“Oracle rebadged old Oracle Cloud services to be Oracle Classic. Oracle Classic has the security incident,” noted Beaumont in a dissection of the incident and Oracle’s response on Medium.

“Oracle are denying it’s on ‘Oracle Cloud’ by using this scope – but it’s still Oracle cloud services, that Oracle manage. That’s part of the wordplay.” Oracle had also quietly contacted multiple customers to confirm some kind of breach, he said.

This leaves interested parties with the unsatisfactory sense that something untoward has happened, without it being clear what.

For now, Oracle is sticking to its guns that its main OCI platform is not involved, but perhaps the confusion could have been avoided with better communication.

Suffering a breach is hugely challenging for any organization but it sometimes pales beside the problems of communicating with customers, journalists, and the army of interested researchers ready to pick apart every ambiguity. Weeks on from the breach becoming public, those ambiguities have yet to be fully cleared up.
paserbyp: (Default)
The generative AI revolution is remaking businesses’ relationship with computers and customers. Hundreds of billions of dollars are being invested in large language models (LLMs) and agentic AI, and trillions are at stake. But GenAI has a significant problem: The tendency of LLMs to hallucinate. The question is: Is this a fatal flaw, or can we work around it?

If you’ve worked much with LLMs, you have likely experienced an AI hallucination, or what some call a confabulation. AI models make things up for a variety of reasons: erroneous, incomplete, or biased training data; ambiguous prompts; lack of true understanding; context limitations; and a tendency to overgeneralize (overfitting the model).

Sometimes, LLMs hallucinate for no good reason. Vectara CEO Amr Awadallah says LLMs are subject to the limits that Shannon’s information theory places on text compression. Since LLMs compress text beyond a certain point (about 12.5% of its original size, a figure in line with Shannon’s classic estimate of roughly one bit of real information per eight-bit character of English), they enter what’s called the “lossy compression zone” and lose perfect recall.

That leads us to the inevitable conclusion that the tendency to fabricate isn’t a bug, but a feature, of these types of probabilistic systems.

What do we do then?

Users have come up with various methods to control or correct for hallucinations, or at least to counteract some of their negative impacts.

For starters, you can get better data. AI models are only as good as the data they’re trained on. Many organizations have raised concerns about bias and the quality of their data. While there are no easy fixes for improving data quality, organizations that dedicate resources to better data management and governance can make a difference.

Users can also improve the quality of LLM response by providing better prompts. The field of prompt engineering has emerged to serve this need. Users can also “ground” their LLM’s response by providing better context through retrieval-augmented generation (RAG) techniques.
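
To make the RAG idea concrete, here is a minimal, self-contained Python sketch: the most relevant passages from a small document store are retrieved and prepended to the prompt so the model answers from supplied context rather than from memory. The retrieval here is naive keyword overlap (real systems use vector embeddings), and `call_llm` is a hypothetical placeholder for whatever model API is actually in use.

```python
# Minimal retrieval-augmented generation (RAG) sketch.

DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Premium subscribers receive priority phone support.",
]

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Rank documents by how many words they share with the query; return the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from it, not from memory."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return (
        "Answer using ONLY the context below. If the answer is not in the context, "
        "say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder -- replace with a real model API call.
    return "(model response would appear here)"

print(call_llm(build_grounded_prompt("What is the refund window?")))
```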

Instead of using a general-purpose LLM, fine-tuning open-source LLMs on smaller sets of domain- or industry-specific data can also improve accuracy within that domain or industry. Similarly, a new generation of reasoning models, such as DeepSeek-R1 and OpenAI o1, trained on smaller domain-specific data sets, includes a feedback mechanism that allows the model to explore different ways to answer a question: the so-called "reasoning" steps.

Implementing guardrails is another technique. Some organizations use a second, specially crafted AI model to interpret the results of the primary LLM. When a hallucination is detected, it can tweak the input or the context until the results come back clean. Similarly, keeping a human in the loop to detect when an LLM is headed off the rails can also help avoid some of LLM’s worst fabrications.
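
Below is a hedged sketch of that guardrail loop, under the assumption that a second checker model returns a hallucination score between 0 and 1: the system retries the primary model with added context until the answer passes the check or a retry budget runs out, at which point a human takes over. `primary_model` and `hallucination_score` are hypothetical stand-ins, not any particular vendor's API.

```python
# Guardrail-loop sketch: a checker model vets the primary model's output.

MAX_RETRIES = 3
SCORE_THRESHOLD = 0.2  # assumed convention: lower score = less likely hallucinated

def primary_model(prompt: str, extra_context: str = "") -> str:
    return f"(answer to: {prompt})"   # stand-in for the main LLM call

def hallucination_score(prompt: str, answer: str) -> float:
    return 0.1                        # stand-in for the second, checker model

def answer_with_guardrail(prompt: str, context_snippets: list) -> str:
    context = ""
    for attempt in range(MAX_RETRIES):
        answer = primary_model(prompt, extra_context=context)
        if hallucination_score(prompt, answer) <= SCORE_THRESHOLD:
            return answer
        # Likely hallucination detected: add grounding context and retry.
        if attempt < len(context_snippets):
            context += "\n" + context_snippets[attempt]
    # Retry budget exhausted: keep a human in the loop rather than guessing.
    return "Escalating to a human reviewer."

print(answer_with_guardrail("What were Q3 revenues?", ["Q3 revenue was $12.4M."]))
```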

When ChatGPT first came out, its hallucination rate was around 15% to 20%. The good news is the hallucination rate appears to be going down.

For instance, Vectara’s Hallucination Leaderboard uses the Hughes Hallucination Evaluation Model, which calculates the odds of an output being true or false on a range from 0 to 1. Vectara’s leaderboard currently shows several LLMs with hallucination rates below 1%, led by Google Gemini-2.0 Flash. That’s a big improvement from a year ago, when Vectara’s leaderboard showed the top LLMs had hallucination rates of around 3% to 5%.

Other hallucination measures don’t show quite the same improvement. The research arm of AIMultiple benchmarked nine LLMs on their capability to recall information from CNN articles. The top-scoring LLM was GPT-4.5 preview with a 15% hallucination rate; Google’s Gemini-2.0 Flash came in at 60%.

“LLM hallucinations have far-reaching effects that go well beyond small errors,” AIMultiple’s Principal Analyst Cem Dilmegani wrote in a March 28 blog post. “Inaccurate information produced by an LLM could result in legal ramifications, especially in regulated sectors such as healthcare, finance, and legal services. Organizations could be penalized severely if hallucinations caused by generative AI lead to infractions or negative consequences.”

One company working to make AI usable for some high-stakes use cases is the search company Pearl. The company combines an AI-powered search engine along with human expertise in professional services to minimize the odds that a hallucination will reach a user.

Pearl has taken steps to minimize the hallucination rate in its AI-powered search engine, which Pearl CEO Andy Kurtzig said is 22% more accurate than ChatGPT and Gemini out of the box. The company does that by using the standard techniques, including multiple models and guardrails. Beyond that, Pearl has contracted with 12,000 experts in fields like medicine, law, auto repair, and pet health who can provide a quick sanity check on AI-generated answers to further drive the accuracy rate up.

“So for example, if you have a legal issue or a medical issue or an issue with your pet, you’d start with the AI, get an AI answer through our superior quality system,” Kurtzig told BigDATAwire. “And then you’d get the ability to then have to get a verification from an expert in that field, and then you can even take it one step further and have a conversation with the expert.”

Kurtzig said there are three major unresolved problems around AI: the persistent problem of AI hallucinations; mounting reputational and financial risk; and failing business models.

“Our estimate on the state of the art is roughly a 37% hallucination level in the professional services categories,” Kurtzig said. “If your doctor was 63% right, you would be not only pissed, you’d be suing them for malpractice. That is awful.”

Big, rich AI companies are running a real financial risk by putting out AI models that are prone to hallucinating, Kurtzig said. He cites a Florida lawsuit filed by the parents of a 14-year-old boy who killed himself when an AI chatbot suggested it(More details: https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0).

“When you’ve got a system that is hallucinating at any rate, and you’ve got really deep pockets on the other end of that equation…and people are using it and relying, and these LLMs are giving these highly confident answers, even when they’re completely wrong, you’re going to end up with lawsuits,” he said.

The CEO of Anthropic recently made headlines when he claimed that 90% of coding work would be done by AI within months. Kurtzig, who employs 300 developers, doesn’t see that happening anytime soon. The real productivity gains are somewhere between 10% and 20%, he said.

The combination of reasoning models and AI agents is supposed to be heralding a new era of productivity, not to mention a 100x increase in inference workloads to occupy all those Nvidia GPUs, according to Nvidia CEO Jensen Huang. However, while reasoning models like DeepSeek can run more efficiently than Gemini or GPT-4.5, Kurtzig doesn’t see them increasing the state of the art.

“They’re hitting diminishing returns,” Kurtzig said. “So each new percentage of quality is really expensive. It’s a lot of GPUs. One data source I saw from Georgetown says to get another 10% improvement, it’s going to cost $1 trillion.”

Ultimately, AI may pay off, he said. But there’s going to be quite a bit of pain before we get to the other side.

“Those three fundamental problems are both huge and unsolved, and they are going to cause us to head into the trough of disillusionment in the hype cycle,” he said. “Quality is an issue, and we’re hitting diminishing returns. Risk is a huge issue that is just starting to emerge. There’s a real cost there in both human lives as well as money. And then these companies are not making money. Almost all of them are losing money hand over fist.

“We’ve got a trough of disillusionment to get through,” he added. “There is a beautiful plateau of productivity out on the other end, but we haven’t hit the trough of disillusionment yet.”
paserbyp: (Default)
The general public is far more pessimistic about the impact of AI than the “AI experts” who work in the field, a new report from the Pew Research Center reveals (More details: https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence).

About 57% of AI experts say AI will have a very or somewhat positive impact on the US over the next 20 years, compared to 17% of the general public. Meanwhile, over 43% of the US public says AI will hurt them rather than benefit them, in contrast to just 24% among experts.

This divide isn't the only major split when it comes to how AI is perceived. The research also picked up a big disagreement between the male and female experts they surveyed.

Over six in 10 (63%) of the male experts agreed with the statement that AI will have a very or somewhat positive impact on the US over the next two decades; that falls to 36% for women in the cohort. Just under a third of female experts (30%) said that AI made them more excited than concerned, compared to 53% of male experts.

Some areas prompted a lot more fear and pessimism among respondents than others. Only 9% of people in the US feel that AI will have a positive impact on elections, amid widespread concerns about deepfakes, rising to 11% among experts (one of the few questions on which both demographics essentially agreed).

One of the largest splits in the study is on how AI will impact work. Only 23% of Americans predict AI will have a positive impact on how people do their jobs, compared to 73% of AI experts.

Still, there are areas where large chunks of the US public are fairly optimistic about the impact of AI, such as in healthcare. As tech giants like Apple may be preparing to pivot toward AI-based medicine, about 44% of the US public say that AI will have a positive impact on health care, which rises to 84% among experts.

It shouldn't come as a huge surprise that the general public has concerns about the rise of AI. Some of the world’s most famous people have been openly discussing the issue for years.

Last month, Bill Gates said humans won’t be needed for “most things” in the coming age of AI, highlighting fields like medicine, teaching, and mental health as ripe for disruption. Meanwhile, singer Paul McCartney has warned that an incorrect approach to AI and creative industries could lead to lost livelihoods for musicians and other creators.

The Pew Research Center surveyed about 5,400 adults to get its findings, including just over 1,000 experts, all of whom had spoken or presented at AI conferences in the past.
paserbyp: (Default)
Agentic AI looks like it’s having a breakout moment, but the core technology behind it has been quietly improving behind the scenes. That progress is being tracked across a series of coding benchmarks, such as SWE-bench and GAIA, leading some to believe AI agents are on the cusp of something big.

It wasn’t that long ago that AI-generated code was not deemed suitable for deployment. The SQL code would be too verbose or the Python code would be buggy or insecure. However, that situation has changed considerably in recent months, and AI models today are generating more code for customers every day.

Benchmarks provide a good way to gauge how far agentic AI has come in the software engineering domain. One of the more popular benchmarks, dubbed SWE-bench, was created by researchers at Princeton University to measure how well LLMs like Meta’s Llama and Anthropic’s Claude can solve common software engineering challenges. The benchmark utilizes GitHub as a rich resource of Python software bugs across 16 repositories and provides a mechanism for measuring how well the LLM-based AI agents can solve them.

When the authors submitted their paper, “SWE-Bench: Can Language Models Resolve Real-World GitHub Issues?” to the International Conference on Learning Representations (ICLR) in October 2023, the LLMs were not performing at a high level. “Our evaluations show that both state-of-the-art proprietary models and our fine-tuned model SWE-Llama can resolve only the simplest issues,” the authors wrote in the abstract. “The best-performing model, Claude 2, is able to solve a mere 1.96% of the issues.”(More details: https://arxiv.org/pdf/2310.06770).

That changed quickly. Today, the SWE-bench leaderboard shows the top-scoring model resolved 55% of the coding issues on SWE-bench Lite, which is a subset of the benchmark designed to make evaluation less costly and more accessible.
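
In outline, an SWE-bench-style evaluation applies the model-generated patch to a checkout of the repository and then re-runs the tests that the original issue broke, counting the instance as resolved only if they now pass. The sketch below is a simplified illustration of that loop, not the benchmark's actual harness; the repository path, patch file, and test ID are hypothetical.

```python
import subprocess

def evaluate_patch(repo_dir: str, patch_file: str, fail_to_pass_tests: list) -> bool:
    """Apply a model-generated patch and re-run the tests the issue originally broke."""
    # Apply the candidate patch (the real harness also resets repo state between runs).
    apply_result = subprocess.run(["git", "apply", patch_file], cwd=repo_dir)
    if apply_result.returncode != 0:
        return False  # the patch does not even apply cleanly

    # The instance counts as resolved only if the previously failing tests now pass.
    test_result = subprocess.run(["python", "-m", "pytest", *fail_to_pass_tests], cwd=repo_dir)
    return test_result.returncode == 0

# Hypothetical benchmark instance.
ok = evaluate_patch(
    repo_dir="checkouts/example-project",
    patch_file="candidate.patch",
    fail_to_pass_tests=["tests/test_parser.py::test_issue_1234"],
)
print("resolved" if ok else "not resolved")
```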

Hugging Face put together a benchmark for General AI Assistants, dubbed GAIA, that measures a model’s capability across several realms, including reasoning, multi-modality handling, web browsing, and general tool-use proficiency. The GAIA tests are unambiguous but challenging, such as counting the number of birds in a five-minute video (More details: https://huggingface.co/papers/2311.12983).

A year ago, the top score on level 3 of the GAIA test was around 14, according to Sri Ambati, the CEO and co-founder of H2O.ai. Today, an H2O.ai-based model based on Claude 3.7 Sonnet holds the top overall score, about 53.

“So the accuracy is just really growing very fast,” Ambati said. “We’re not fully there, but we are on that path.”

H2O.ai’s software is involved in another benchmark that measures SQL generation. BIRD, which stands for BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation, measures how well AI models can parse natural language into SQL.

When BIRD debuted in May 2023, the top-scoring model, CoT+ChatGPT, demonstrated about 40% accuracy. One year ago, the top-scoring AI model, ExSL+granite-20b-code, was based on IBM’s Granite AI model and had an accuracy of about 68%. That was quite a bit below human performance, which BIRD measures at about 92%. The current BIRD leaderboard shows an H2O.ai-based model from AT&T as the leader, with a 77% accuracy rate.
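
One common way to score text-to-SQL systems of the kind BIRD evaluates is execution accuracy: run the model's SQL and the reference ("gold") SQL against the same database and count a prediction as correct if the result sets match. Below is a minimal sqlite3 sketch of that check; the schema, question, and queries are invented for illustration.

```python
import sqlite3

# Tiny in-memory database standing in for a real benchmark schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 'east', 100.0), (2, 'west', 250.0), (3, 'east', 75.0);
""")

def execution_match(predicted_sql: str, gold_sql: str) -> bool:
    """Count a prediction as correct if it returns the same rows as the gold query."""
    try:
        predicted_rows = sorted(conn.execute(predicted_sql).fetchall())
    except sqlite3.Error:
        return False  # invalid SQL counts as a miss
    gold_rows = sorted(conn.execute(gold_sql).fetchall())
    return predicted_rows == gold_rows

# Hypothetical question: "What is the total order amount per region?"
gold_sql = "SELECT region, SUM(amount) FROM orders GROUP BY region"
predicted_sql = "SELECT region, SUM(amount) AS total FROM orders GROUP BY region"
print("match" if execution_match(predicted_sql, gold_sql) else "no match")
```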

The rapid progress in generating decent computer code has led some influential AI leaders, such as Nvidia CEO and co-founder Jensen Huang and Anthropic co-founder and CEO Dario Amodei, to make bold predictions about where we will soon find ourselves.

“We are not far from a world–I think we’ll be there in three to six months–where AI is writing 90 percent of the code,” Amodei said earlier this month. “And then in twelve months, we may be in a world where AI is writing essentially all of the code.”

During his keynote last week, Huang shared his vision about the future of agentic computing. In his view, we are rapidly approaching a world where AI factories generate and run software based on human inputs, as opposed to humans writing software to retrieve and manipulate data.

“Whereas in the past we wrote the software and we ran it on computers, in the future, the computers are going to generate the tokens for the software,” Huang said. “And so the computer has become a generator of tokens, not a retrieval of files. [We’ve gone] from retrieval-based computing to generative-based computing.”

Others are taking a more pragmatic view. Anupam Datta, the principal research scientist at Snowflake and lead of the Snowflake AI Research Team, applauds the improvement in SQL generation. For instance, Snowflake says its Cortex Agent’s text-to-SQL generation accuracy rate is 92%. However, Datta doesn’t share Amodei’s view that computers will be rolling their own code by the end of the year.

“My view is that coding agents in certain areas, like text-to-SQL, I think are getting really good,” Datta said at GTC25 last week. “Certain other areas, they’re more assistants that help a programmer get faster. The human is not out of the loop just yet.”

Programmer productivity will be the big winner thanks to coding copilots and agentic AI systems, he said. We’re not far from a world where agentic AI will generate the first draft, he said, and then the humans will come in and refine and improve it. “There will be huge gains in productivity,” Datta said. “So the impact will be very significant, just with copilot alone.”

H2O.ai’s Ambati also believes that software engineers will work closely with AI. Even the best coding agents today introduce “subtle bugs,” so people still need to look at the code, he said. “It’s still a pretty necessary skill set.”

One area that’s still pretty green is the semantic layer, where natural language is translated into business context. The problem is that the English language can be ambiguous, with multiple meanings from the same phrase.

“Part of it is understanding the semantics layer of the customer schema, the metadata,” Ambati said. “That piece is still building. That ontology is still a bit of a domain knowledge.”

Hallucinations are still an issue too, as is the potential for an AI model to go off the rails and say or do bad things. Those are all areas of concern that companies like Anthropic, Nvidia, H2O.ai, and Snowflake are all working to mitigate. But as the core capabilities of Gen AI get better, the number of reasons not to put AI agents into production decreases.
