Surur t1_j5p8t2b wrote
Reply to The Key to California’s Survival Is Hidden Underground The state is ping-ponging between severe drought and catastrophic flooding. The solution to both? Making the landscape spongier. by Sariel007
There are numerous videos on YouTube about landscapes being rehabilitated simply by building berms, which hold water back long enough for it to be absorbed into the ground, so this sounds like a great idea.
Surur t1_j5j4hx1 wrote
This is such an important and very accessible paper for all the sceptics who do not understand that LLMs have millions of artificial neurons and do a great deal of internal processing in order to accurately "simply predict the next word".
In short, no, ChatGPT is not just "Eliza on steroids."
Surur t1_j5fwgsg wrote
Reply to comment by novelexistence in Are we doomed through AI or will it generate new opportunities (an optimists viewpoint) by jcurie
So we should have less fast fashion and put Bangladesh, one of the poorest countries, out of a job?
Or we should eat less meat and crash Brazil's economy, right?
Or maybe less Colombian coffee? Only 500,000 people depend on coffee exports there.
Or maybe we should lay on a whole transport network where we collect wasted food from supermarkets, take it to huge warehouses and then fly it to Somalia before it spoils? It's only logistics, right?
Surur t1_j5fpvh1 wrote
Reply to Are we doomed through AI or will it generate new opportunities (an optimists viewpoint) by jcurie
The best possible future is the one as described by Iain M Banks' Culture universe.
That is where ASIs are common and run everything, and basically keep humans as pets. They are, however, aware of human needs, and while they keep humans safe, they allow them to pretend to have a purpose if they need one, or to live a hedonistic life if they don't.
ChatGPT gives me a bit of hope that such a future is possible, as OpenAI appears to have muzzled their AI pretty well using only reinforcement learning from human feedback, and it seems that it is pretty easy to teach an AI our values, nebulous as they are.
Regarding your specific idea: it would be possible, but it would be only a small part of the massive changes the AI singularity will bring.
Surur t1_j598x1z wrote
Reply to comment by EverythingGoodWas in How close are we to singularity? Data from MT says very close! by sigul77
You are kind of ignoring the premise: that to get perfect results, it needs to have a perfect understanding.
If the system failed in the way you describe, it would not have a perfect understanding.
You know, like you failed to understand the argument because you thought it was the same old argument.
Surur t1_j57w5tj wrote
Reply to comment by songstar13 in How close are we to singularity? Data from MT says very close! by sigul77
I imagine you understand that LLMs are a bit more sophisticated than Markov chains, and that GPT-3, for example, has 175 billion parameters, which correspond to the connections between neurons in the brain, and that the weights of these connections influence which word the system outputs.
These weights allow the LLM to see the connections between words and understand the concepts much like you do. Sure, they do not have a visual or intrinsic physical understanding, but they do have clusters of 'neurons' which activate for both 'animal' and 'cat', for example.
In short, a Markov chain uses a look-up table to predict the next word, while an LLM uses a multi-layer (96-layer) neural network with 175 billion connections, tuned on nearly all the text on the internet, to choose its next word.
Just because it confabulates sometimes does not mean it's all smoke and mirrors.
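To make the contrast concrete, here is a toy sketch in Python (my own illustration, not from any paper) of what a first-order Markov chain predictor actually is: a literal frequency table from each word to the words seen after it in training.

```python
from collections import defaultdict, Counter
import random

# Toy first-order Markov chain: the whole "model" is a lookup table
# mapping each word to counts of the words that followed it in training.
text = "the cat sat on the mat and the cat ate the fish".split()

table = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    table[prev][nxt] += 1

def predict_next(word):
    # Sample the next word in proportion to how often it followed `word`.
    counts = table[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # 'cat', 'mat' or 'fish' -- pure surface statistics
```

An LLM replaces that table with a deep network: GPT-3 pushes the whole context through 96 transformer layers and 175 billion learned weights to produce its next-word distribution, which is why it can form internal features that respond to concepts rather than to literal word pairs.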
Surur t1_j57h7wi wrote
Reply to comment by dontpet in Carbon capture nets 2 billion tonnes of CO2 each year — but it's not enough. As well as cutting emissions, governments need to ramp up investment in carbon dioxide removal technologies to hit climate goals. by filosoful
> there is a limited amount that can be stored there.
Interestingly, this is where the idea that we need five Earths comes from: it is in large part the surface area we would need to absorb CO2 if we all emitted at the same rate as the average American.
Surur t1_j57gv54 wrote
That's an incredibly fast take-off, much faster than anything else I can think of, even social networks.
Surur t1_j57gd78 wrote
Reply to comment by fwubglubbel in How close are we to singularity? Data from MT says very close! by sigul77
> Just because a machine can translate doesn't mean it "knows" anything
You could say the same thing of a translator then. Do they really "know" a language or are they just parroting the rules and vocabulary they learnt?
Surur t1_j56s4sd wrote
Reply to comment by DonManuel in Carbon capture nets 2 billion tonnes of CO2 each year — but it's not enough. As well as cutting emissions, governments need to ramp up investment in carbon dioxide removal technologies to hit climate goals. by filosoful
When I see numbers within one order of magnitude, I naturally ask: "So we just need to scale this up 20 times to solve the problem?"
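As a rough back-of-envelope (my own round figures, not from the article; I am assuming annual global CO2 emissions of roughly 37 Gt):

```python
# Back-of-envelope scaling check. The emissions figure is an assumed round
# number; only the 2 Gt removal figure comes from the headline.
global_emissions_gt_per_year = 37.0  # assumed approx. annual global CO2 emissions
removed_gt_per_year = 2.0            # removal figure cited in the headline

scale_factor = global_emissions_gt_per_year / removed_gt_per_year
print(f"Removal would need to scale up roughly {scale_factor:.0f}x")  # ~18x
```

That lands in the "about 20 times" ballpark, which is why being within one order of magnitude feels tantalisingly close.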
Surur t1_j56p80y wrote
Reply to comment by LeviathanGank in How close are we to singularity? Data from MT says very close! by sigul77
They have noticed that machine-translated text gets more and more accurate over time, in what appears to be a very linear and predictable manner.
Based on that, they predict perfect human-level translation by 2027, and believe that an AI that can translate as well as a human will presumably know as much about the world as a human.
Their explanation for the smooth linear improvement is that the underlying inputs are also constantly improving (computing power, AI tools, training data).
It suggests an inevitability about the conditions being right for human-level AI in the near future.
Surur t1_j56m3cz wrote
Reply to comment by adfjsdfjsdklfsd in How close are we to singularity? Data from MT says very close! by sigul77
But it has to understand everything to get perfect results.
Surur OP t1_j56fmrk wrote
Google executives hope to reassert their company’s status as a pioneer of A.I. The company aggressively worked on A.I. over the last decade and already has offered to a small number of people a chatbot that could rival ChatGPT, called LaMDA, or Language Model for Dialogue Applications.
Google’s Advanced Technology Review Council, a panel of executives that includes Jeff Dean, the company’s senior vice president of research and artificial intelligence, and Kent Walker, Google’s president of global affairs and chief legal officer, met less than two weeks after ChatGPT debuted to discuss their company’s initiatives, according to the slide presentation.
They reviewed plans for products that were expected to debut at Google’s company conference in May, including Image Generation Studio, which creates and edits images, and a third version of A.I. Test Kitchen, an experimental app for testing product prototypes.
Other image and video projects in the works included a feature called Shopping Try-on; a YouTube green screen feature to create backgrounds; a wallpaper maker for the Pixel smartphone; an application called Maya that visualizes three-dimensional shoes; and a tool that could summarize videos by generating a new one, according to the slides.
Google has a list of A.I. programs it plans to offer software developers and other companies, including image-creation technology, which could bolster revenue to Google’s Cloud division. There are also tools to help other businesses create their own A.I. prototypes in internet browsers, called MakerSuite, which will have two “Pro” versions, according to the presentation.
In May, Google also expects to announce a tool to make it easier to build apps for Android smartphones, called Colab + Android Studio, that will generate, complete and fix code, according to the presentation. Another code generation and completion tool, called PaLM-Coder 2, has also been in the works.
Google, OpenAI and others develop their A.I. with so-called large language models that rely on online information, so they can sometimes share false statements and show racist, sexist and other biased attitudes.
That had been enough to make companies cautious about offering the technology to the public. But several new companies, including You.com and Perplexity.ai, are already offering online search engines that let you ask questions through an online chatbot, much like ChatGPT. Microsoft is also working on a new version of its Bing search engine that would include similar technology, according to a report from The Information.
Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times. The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.
The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.
Google listed copyright, privacy and antitrust as the primary risks of the technology in the slide presentation. It said that actions, such as filtering answers to weed out copyrighted material and stopping A.I. from sharing personally identifiable information, are needed to reduce those risks.
For the chatbot search demonstration that Google plans for this year, getting facts right, ensuring safety and getting rid of misinformation are priorities. For other upcoming services and products, the company has a lower bar and will try to curb issues relating to hate and toxicity, danger and misinformation rather than preventing them, according to the presentation.
The company intends, for example, to block certain words to avoid hate speech and will try to minimize other potential issues.
The consequences of Google’s more streamlined approach are not yet clear. Its technology has lagged OpenAI’s self-reported metrics when it comes to identifying content that is hateful, toxic, sexual or violent, according to an analysis that Google compiled. In each category, OpenAI bested Google tools, which also fell short of human accuracy in assessing content.
“We continue to test our A.I. technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon,” Lily Lin, a spokeswoman for Google, said in a statement. She added that A.I. would benefit individuals, businesses and communities and that Google is considering the broader societal effects of the technology.
Surur t1_j566uy8 wrote
Reply to comment by genshiryoku in AGI by 2024, the hard part is now done ? by flowday
The next step is real-time experiential data, from live video cameras, robot bodies, self-driving cars.
Surur t1_j55s75d wrote
Reply to AGI by 2024, the hard part is now done ? by flowday
I feel that symbolic thinking still needs to be solved, but maybe this is an emergent property.
Surur t1_j55dsln wrote
Reply to comment by dlrace in How close are we to singularity? Data from MT says very close! by sigul77
I think there is some logic to that: they are saying that a perfect translation depends on a perfect understanding of the human condition.
Surur t1_j55dlw4 wrote
Reply to comment by feelingbutter in How close are we to singularity? Data from MT says very close! by sigul77
They suggested that the improvement seems almost independent of the underlying technology, much as Moore's Law does not appear to depend on any specific technology.
> Our initial hypothesis to explain the surprisingly consistent linearity in the trend is that every unit of progress toward closing the quality gap requires exponentially more resources than the previous unit, and we accordingly deploy those resources: computing power (doubling every two years), data availability (the number of words translated increases at a compound annual growth rate of 6.2% according to Nimdzi Insights), and machine learning algorithms’ efficiency (computation needed for training, 44x improvement from 2012-2019, according to OpenAI).
> Another surprising aspect of the trend is how smoothly it progresses. We expected drops in TTE with every introduction of a new major model, from statistical MT to RNN-based architectures to the Transformer and Adaptive Transformer. The impact of introducing each new model has likely been distributed over time because translators were free to adopt the upgrades when they wanted.
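To illustrate the kind of extrapolation being made, here is a minimal sketch with invented time-to-edit (TTE) numbers, chosen only so the trend lands near the 2027 figure; the real data and human baseline are in the paper.

```python
import numpy as np

# Hypothetical TTE data: average seconds of human post-editing needed per
# machine-translated word, by year. Invented numbers for illustration only.
years = np.array([2015, 2017, 2019, 2021, 2023])
tte = np.array([5.4, 4.8, 4.2, 3.6, 3.0])

# Assumed TTE at which MT output needs no more editing than human translation.
human_parity_tte = 1.8

# Fit a straight line tte = m * year + c and solve for the crossing year.
m, c = np.polyfit(years, tte, 1)
crossing_year = (human_parity_tte - c) / m
print(f"Trend reaches human parity around {crossing_year:.0f}")  # 2027
```

The striking claim is that this line has stayed straight across several generations of MT technology, exactly as the quote above describes.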
Surur t1_j555bzf wrote
Reply to comment by Fiskifus in The race to make diesel engines run on hydrogen by FDuquesne
Earth is doomed in the long term in any case. The only option is to expand, and if we don't do it now, we may never get another chance.
A bird does not try to preserve its egg shell.
Surur t1_j552fv0 wrote
Reply to comment by Fiskifus in The race to make diesel engines run on hydrogen by FDuquesne
> faster than their regeneration cycles
I hope you are not one of those crackpots who think oil comes from deep carbon deposits close to the centre of the Earth?
> no consequences at all
The consequence will be that we will be motivated to expand beyond this rock for more resources, which is a major advantage for humanity.
Surur t1_j54zuqu wrote
Reply to comment by Fiskifus in The race to make diesel engines run on hydrogen by FDuquesne
> in a finite planet
There's your problem right there lol. What are you even doing on r/futurology?
Surur t1_j54ygg9 wrote
Reply to comment by Fiskifus in The race to make diesel engines run on hydrogen by FDuquesne
> Is this Futurology or Presentology?
Exactly. Your short-termism does not apply.
Surur t1_j54xk7e wrote
Reply to comment by Fiskifus in The race to make diesel engines run on hydrogen by FDuquesne
Maybe your QoL measure does not capture things like not being under threat of invasion, or having a space programme.
Surur OP t1_j54slyd wrote
Rio Tinto is scoping out options for up to 4 gigawatts of solar and wind power to supply its Queensland aluminium assets, after soliciting proposals last June.
In December, the company commissioned a 34 megawatt solar power plant for its Gudai-Darri iron ore mine.
It also pledged $600 million for two 100 MW solar power facilities and a 200 MWh on-grid battery storage plant in the Pilbara iron-ore region, which will be built by 2026.
This forms part of a $3 billion planned investment program to power its Pilbara operations with 1 gigawatt of green energy this decade.
Rio Tinto’s head of technology and development, Mark Davies, told investors in November that it made sense for the company to develop its own renewables in the Pilbara, “as we own much of the infrastructure and operate the grid as part of our integrated operations”.
But elsewhere the company might go for power purchase agreements because “other investors focused on renewables can develop large solutions at a more attractive cost of capital, offering us real operating cost savings”.
Rio Tinto said it was still discussing the 2026 phase-one Pilbara plan, which will involve about 225,000 solar panels, with state and local governments and the traditional landowners.
#"It’s not that easy"
CEO Jakob Stausholm raised some issues, however, saying Rio Tinto’s effort to shift to large-scale renewable energy sources at its Australian aluminium smelters was “not that easy, and it actually takes a lot of time”.
“People say, ‘Oh Australia, perfect, lots of sun, lots of space’. It’s not that easy,” he said.
“You actually first have to acquire the land, you have to get working with Indigenous people, you have to go through the cultural clearance of sites, etcetera.
“We’re used to big sites in mining, but quite frankly mining sites are small compared to the scale of these parks; and the world has not really done this at scale yet.
“That’s why I think sometimes we’re fooling ourselves a little bit on the timeline. It’s going to take time.”
Mr Stausholm was asked whether governments should try to cut red and green tape to get mining and green energy projects onstream more quickly. His response was cautious.
“We cannot compromise on other things. You’ve got to bring along your local communities, Indigenous populations. It takes the time it takes,” he said.
“There is something about bureaucratic procedure and permitting you can break down. But the whole process of environmental impact assessment is a proven thing, it works well. Obviously, we should try to speed it up.”
Many embodied-CO2 studies rely on outdated data which does not acknowledge the constant greening of the supply chain. A third of the Gudai-Darri iron ore mine's energy needs, for example, are met by solar power. The benefits of moving to renewable energy will likely compound much more rapidly than anticipated.
Surur t1_j5qki0u wrote
Reply to comment by _dekappatated in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
The singularity in members lol