Interesting. A useful model for other industries?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4488199
Embracing Artificial Intelligence in the Legal Landscape: The Blueprint
This case study outlines a blueprint for strategic transformation, based on the example of a real-life law firm operating in Germany that is adopting AI tools and digitalization. Leveraging Kotter's 8-step change model, the research underscores the imperative to adopt AI due to pressing market competition and escalating internal costs. The paper articulates how AI can optimize legal processes and dramatically improve efficiency and client satisfaction, while addressing the firm's readiness to adapt and potential resistance.
By building a coalition of key stakeholders and envisioning the firm's future as a technology-driven entity, this research elucidates a pragmatic roadmap for the firm's digital journey.
The conclusion suggests a pivotal shift toward a culture that celebrates change and fosters growth, strengthening the firm's competitive position and enabling sustainable success in the ever-evolving legal landscape.
Does all the training data have to pass an ethics review before it can be used safely?
https://link.springer.com/article/10.1007/s00429-023-02662-7
The human cost of ethical artificial intelligence
Foundational models such as ChatGPT critically depend on vast data scales the internet uniquely enables. This implies exposure to material varying widely in logical sense, factual fidelity, moral value, and even legal status. Whereas data scaling is a technical challenge, soluble with greater computational resource, complex semantic filtering cannot be performed reliably without human intervention: the self-supervision that makes foundational models possible at least in part presupposes the abilities they seek to acquire. This unavoidably introduces the need for large-scale human supervision—not just of training input but also model output—and imbues any model with subjectivity reflecting the beliefs of its creator. The pressure to minimise the cost of the former is in direct conflict with the pressure to maximise the quality of the latter. Moreover, it is unclear how complex semantics, especially in the realm of the moral, could ever be reduced to an objective function any machine could plausibly maximise. We suggest the development of foundational models necessitates urgent innovation in quantitative ethics and outline possible avenues for its realisation.
Tools & Techniques. ChatGPT responds by generating text from patterns learned during training, rather than by searching its training data for 'appropriate' chunks. If you asked for a 'new recipe for cornbread', there might be hundreds of articles with 'new' in the title. How would it ever come up with something really new?
https://www.businessinsider.com/how-to-use-get-better-chatgpt-ai-prompt-guide
12 ways to get better at using ChatGPT: Comprehensive prompt guide
… ChatGPT doesn't always produce desirable outcomes, and the tech can be prone to errors and misinformation.
It all comes down to the prompts users put into ChatGPT.
"If you really want to generate something that is going to be useful for you, you need to do more than just write a generic sentence," Jacqueline DeStefano-Tangorra, a consultant who uses ChatGPT to secure new contracts, told Insider.