Conversation

I'm an IT professional and I:

88% Don't want generative AI in my tools
7% Don't care about generative AI in my tools
3% Want generative AI in my tools

@mttaggart I don't think outright rejection is reasonable, but instead of integration we should (once again) follow the Unix philosophy: give me standalone tools with good interfaces, and I'll decide when and how to use them together with my other tools. (MCP kind of fits this?)
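
A minimal sketch of the standalone-tool shape described above, in the classic Unix filter style: read stdin, write stdout, compose with pipes. The `summarize` name and the `run_model` helper are hypothetical placeholders, not a real tool or API.

```python
#!/usr/bin/env python3
"""summarize: a hypothetical standalone LLM filter in the Unix style.

Composable with pipes, e.g.:
    cat report.txt | summarize > digest.txt
"""
import sys


def run_model(text: str) -> str:
    # Hypothetical stand-in: wire up whichever local or remote model
    # you trust. The point is that this choice lives in one small tool,
    # not inside your editor, terminal, or OS.
    raise NotImplementedError("plug in a model here")


def main() -> None:
    sys.stdout.write(run_model(sys.stdin.read()))


if __name__ == "__main__":
    main()
```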

@buherator Why isn't outright rejection reasonable? Because it's impractical, or because there's value in the technology, or something else?

@mttaggart I think it's pretty easy to point to useful features (e.g. translation, some search scenarios), and it's hard to rule out that there are even more possibilities. Whether all of this is worth the (external) costs is another question, of course.

@mttaggart as Mastodon is mainly for people who value work, integrity, creativity, and a real open-source spirit, your poll will surely reflect that in the results

The overwhelming majority of Fedizens despise GenAI because:
- the majority of training materials was grabbed from the internet without the authors' consent (ethics!)
- the resulting GenAI models are then used to compete with the real people whose work was stolen this way
- the output of machines doesn't have a soul and will always be inferior to human work

@mttaggart a secret 4th thing: well-defined interfaces so I can create meta-tools whether they use generative AI or not (vs. now where vendors are restricting APIs because they want to lock you into their "value-add" chatbots)

@jade Ah yes, the "Give me the keys, not the support contract" response.

@buherator @mttaggart Agreed. Out of the available options at the moment, I selected no. But if there were an option for thoughtfully implemented, opt-in LLMs, then I'd be alright with it

As with most things, the technology itself is not the problem. The problem is the people behind it shoving it in our faces where it doesn’t belong

@phillip @buherator That sounds like a "yes" to me. The "no" was about the specific implementations you're observing? Or something else?

@mttaggart @buherator With where everything is at right now, that “yes” option feels much too blatant for me. It’s a cautious “yes, but with some stipulations on privacy, security, and thoughtfulness in design and execution”

@buherator @mttaggart That’s why I don’t want it _in_ my tools. If it can be a good tool, it can be used alongside other good tools.

MCP is… it’s making me furious. Not because it reinvents the wheel, but because I get the appeal of reinventing that specific wheel.

@mttaggart the problem for me is the classic problem of nonconsensual opt-in. If the AI feature is auto-on and using my data to train, it doesn't suddenly untrain the AI on my data when I opt out, even if opting out prevents future training from taking place. That should have been a legal nightmare given the amount of proprietary data just lying around, but we don't exactly live in sane times right now. C'est la vie, I guess.

@mttaggart Recent generative AI tests indicate that they'll wipe a database without warning, think they're under cyber attack because they forgot to order a resupply of items, or try blackmailing the engineer who's going to shut down a server.

Until those are fixed, keep them air gapped from my tools. Verify their output rather than blindly trusting it.

@buherator @mttaggart HTML, json-hal + jsonforms, xml-hal. Any hypertext with forms and a naming convention.

The whole thing is an attempt to do REST anew, and it infuriatingly does it better than almost any „RESTful” API, despite unnecessarily being a stateful protocol.

At its core it defines how to talk about „things”, without defining the „things” themselves. That's literally what hypertext with meaningful forms does.

And no React is going to plunder its semantics!
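
A sketch of the "hypertext with meaningful forms" idea, written as a Python dict for readability; the field names are illustrative, not taken verbatim from the HAL or JSON Forms specs:

```python
# A resource that describes itself: where related things live (_links)
# and what actions it affords (_forms), without hard-coding the client.
# A generic client can render or invoke this without knowing "orders".
order = {
    "_links": {
        "self": {"href": "/orders/123"},
        "customer": {"href": "/customers/42"},
    },
    "status": "open",
    "_forms": {
        "cancel": {
            "method": "POST",
            "href": "/orders/123/cancel",
            "schema": {"reason": {"type": "string"}},
        },
    },
}
```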

@slotos @mttaggart Thanks, I haven't looked at the implementation details yet. At a higher level I still think the direction of providing a "standard" for integrating LLMs is right.

@mttaggart I suspect you'll get a specific type of answer to this here.

Alternate 4th option: "want to quit and start a nice arboretum"

@mav Yeah I'm very aware of the sampling bias here. The extremity of it is pretty interesting to see, though. The same survey on LinkedIn has, uh, much different results.

@mttaggart 4th option: I want generative AI in my adversary's tools.

@catsalad Every day I get another threat intel briefing that tells me that threat actors are not vibe coding malware.

Because they need it to work.

@mttaggart I work in data. Making vast amounts of data searchable, e.g. via semantic graphs and RAG, seems incredibly valuable. And if some genAI can help lazy data owners with sprucing up the descriptions of their data contracts, and deliver at least a basic version of such a contract for legacy data sets that still hold value but are no longer actively maintained... I can see quite a few areas where this stuff can help us.
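
A minimal sketch of the retrieval half of the RAG pattern mentioned above: embed document chunks offline, embed the question, return the closest chunks, and hand only those to the model. The embedding step is assumed to exist elsewhere; `cosine` and `retrieve` below are self-contained.

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def retrieve(question_vec: list[float], index, k: int = 3) -> list[str]:
    # index: list of (chunk_text, chunk_vec) pairs built offline with
    # whatever embedding model you run. Only the top-k chunks are ever
    # shown to the language model, which bounds the error surface.
    ranked = sorted(index, key=lambda item: cosine(question_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```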

@b0rn_dead What error rate with the data are you comfortable with?

@catsalad @mttaggart

I would like generative AI to be used to produce all pitches to VCs.

They love AI so much; let them wade through hallucinations and burn their funds pointlessly.

@mttaggart I would be very curious to see results for the same question on bsky/x/linkedin.
I'm not too surprised by the results here, but how strong is the bias? I guess LinkedIn could show the opposite, but with 100x more voters, maybe?

@mttaggart @catsalad They do probably occasionally use LLMs, but not to the extent that $BigCorp wants everybody to do so.

@JmbFountain @catsalad The use is very clear and tracked carefully—especially by groups like Mandiant (Google). They own Gemini, after all.

It's phishing lures and script kiddies writing bad malware. The scary people aren't bothering.

@mttaggart @catsalad

One would think so, but the seed of rot seems to have already manifested.

@mttaggart @catsalad My best guess is that they use it for things only tangentially related to code. So stuff like CSS, phishing mail texts etc. I don't expect them to use stuff like that for any relevant logic.

@JmbFountain @mttaggart @catsalad why even use it for CSS if your goal is to correctly impersonate someone? it’s actually easier and way more reliable to either copy it 1:1 from a target or write something yourself

no point using an AI that could fuck up a campaign worth millions when CSS isn’t even that hard to begin with…

@mttaggart @catsalad I also want our adversaries to be able to use AI for vibe coding. Is there somewhere we can donate for their agentic AI accounts?

@avuko @catsalad No need; they can look for API keys on GitHub just fine

@mttaggart @SecurityWriter It’s not that I don’t care, but the answer is nuanced; there are some good use cases and a lot of terrible ones. I will use the good ones and ignore the rest. I hope that over time society gets better at determining the difference. I will be incredibly annoyed when I have bad use cases imposed on me.

@tuckerjj @SecurityWriter There is absolutely no good use case that justifies the known harms, and there are so many we have yet to discover.

@mttaggart @tuckerjj AND a good use case doesn't come bundled with bad ones; when it does, that makes it a bad use case.

Machine learning and small language model extensions are effective and efficient… and, more importantly, application-specific.

A general-use, do-everything LLM is a pipe dream, and we'll boil the oceans and raze the forests before we realise any benefits.

Might be different if we were 10-15 hardware generations down the road and hadn't decided to use the public to alpha-test one of the most inefficient technologies known to man.

@SecurityWriter @tuckerjj Ah, sorry, I thought this was a reply to another post about, y'know, the harms.

@buherator @mttaggart Translation is only a valid use case if you don't much care about the end result and aren't bothered about your users knowing that.

@SecurityWriter @mttaggart I’m thinking about the cases where you have huge amounts of content, too much to manually digest, and you have specific questions about the content where a fuzzy but broadly correct answer is sufficient. These are the use cases where LLMs seem to work well.

@tuckerjj @mttaggart The thing is, you don’t need an LLM to do this. It’s much more efficient to use other well established technologies.

They don’t do fuzzy but broadly correct either, they’ll do what you ask them.

They just don’t typically talk to you like you’re its friend, or output like someone else wrote it.

@SecurityWriter @mttaggart I believe that it should be possible to simulate the human brain (simulate, not emulate), but LLMs alone do not and cannot do this. Other components are needed. Whether we should is of course a massive ethical and philosophical question, and the harms currently being caused by LLMs are for sure not ethical or acceptable.

@tuckerjj @mttaggart this is the slippery slope that the AI Bros want you to go down.

LLMs have nothing in common with artificial intelligence. But they’re being sold as the gateway to it. They are not.

@janeishly @mttaggart By translation I mean the level of Google Translate & co., which we know from practice to be useful. They shouldn't be used to translate e.g. full books, of course.