IJRASET Journal for Research in Applied Science and Engineering Technology
Authors: Samhita Adhyapak, Shivani Nair, Prof. Hrishikesh Mogare
DOI Link: https://doi.org/10.22214/ijraset.2024.63423
The integration of artificial intelligence (AI) into software development has introduced significant privacy and security challenges. AI-based code understanding systems, such as OpenAI Codex and GitHub Copilot, have been found to generate code snippets containing sensitive information, leading to potential unauthorized access and misuse. This study aims to provide a comprehensive understanding of the privacy and security risks associated with these systems and propose effective mitigation strategies. The literature review examines the foundational principles and techniques behind AI-based code understanding systems, including machine learning algorithms, training datasets, and code analysis methods. The assessment of risks involves analyzing scholarly articles, industry reports, and case studies that discuss the privacy and security risks inherent in these systems. The case study analysis presents real-world examples of privacy and security incidents involving AI-based code understanding systems. Notable cases include OpenAI Codex inadvertently generating code snippets containing sensitive information and GitHub Copilot sometimes generating code directly copied from copyrighted sources in its training data. The risk assessment categorizes and evaluates the potential privacy and security risks associated with AI-based code understanding systems. The proposed mitigation strategies include data anonymization, access controls, audit mechanisms, encryption, and adherence to legal and ethical frameworks. The implementation of these strategies involves pre-processing datasets to ensure they do not contain any sensitive information before training AI models. Robust access controls are designed to regulate who or what can view or use resources within a computing environment. Audit mechanisms involve tracking and recording activities within systems to ensure appropriate use. Encryption ensures that even if data is accessed without authorization, it remains unintelligible and secure. Legal and ethical frameworks provide structured guidelines to govern the responsible use of AI and the protection of sensitive data. Components of these frameworks include policies and procedures, training and awareness, and compliance programs. By implementing these comprehensive mitigation strategies, organizations can significantly reduce the privacy and security risks associated with AI-based code understanding systems. This ensures that the benefits of AI in software development are realized without compromising the privacy and security of sensitive code-related data.
I. INTRODUCTION
The advent of AI-based systems has revolutionized numerous domains, including software development [1]. These AI and machine learning (ML) systems, which can analyze, interpret, and generate code snippets, have significantly enhanced productivity and efficiency. Notable examples include OpenAI’s Codex and GitHub Copilot, which are capable of understanding natural language prompts and providing corresponding code suggestions. These advancements promise to streamline workflows, assist in debugging, and even educate novice programmers. However, alongside these benefits, there are substantial privacy and security risks that must be meticulously addressed. AI-based code understanding systems typically require extensive datasets for training. These datasets often comprise publicly available code from various sources, including open-source repositories and code hosting platforms. However, this practice raises critical concerns about the inadvertent inclusion of sensitive or proprietary information. For instance, during the training process, these AI models might encounter and subsequently reveal passwords, API keys, or proprietary algorithms embedded in the training data. Such exposures can have severe implications, including unauthorized access to systems, intellectual property theft, and overall data breaches. The potential exposure of sensitive information is not merely a theoretical concern but has been evidenced in practical scenarios. For instance, there have been instances where GitHub Copilot suggested proprietary code snippets that should not have been publicly accessible. These incidents underscore the pressing need for robust data protection mechanisms and comprehensive risk assessments in the deployment of AI-based code understanding tools.
Beyond the risk of exposing sensitive information, there are broader security implications. AI systems can be targeted by malicious actors aiming to exploit vulnerabilities within these tools. If an AI system used for code analysis is compromised, it can provide attackers with access to a wealth of confidential information. Moreover, the misuse of AI-generated code snippets, which might include flawed or insecure code, can introduce new vulnerabilities into software systems. This paper aims to provide a thorough analysis of the privacy and security risks associated with AI-based code understanding systems. It will explore specific case studies, assess the potential impacts of these risks, and propose practical approaches to mitigate them. By addressing these concerns, we can ensure that the benefits of AI in software development are realized without compromising the privacy and security of code-related data. In the following sections, we will delve deeper into the existing literature on AI-based code understanding, outline the specific privacy and security risks identified, and propose strategies to safeguard sensitive information and proprietary code. Through this comprehensive analysis, we aim to contribute to the development of more secure and privacy-conscious AI tools in the software development industry. In recent years, artificial intelligence (AI) and machine learning (ML) have revolutionized the field of software development by offering advanced tools for code analysis and enhancement. These AI-powered tools can provide context-aware code completion suggestions, assist in code review by flagging potential issues, and scan code for security vulnerabilities such as SQL injection risks and data leaks. However, the integration of AI in code understanding systems also introduces significant privacy and security risks, particularly related to the potential exposure of sensitive or proprietary information. This paper analyzes these risks and proposes effective approaches to protect code-related data from unauthorized access or misuse.
II. AI-BASED CODE UNDERSTANDING
AI-based code understanding involves leveraging machine learning algorithms to interpret and generate code efficiently [2]. These systems are designed to comprehend code syntax, semantics, and logic, enabling them to perform a variety of tasks that traditionally required human expertise. The development and deployment of these AI systems have been driven by the growing complexity of software, the need for rapid development cycles, and the proliferation of large codebases.
A. Key Components and Techniques
B. Applications
C. Advantages
D. Challenges
III. METHODOLOGY
The methodology of this research is comprehensive, analyzing applications built with existing AI models and their code [3]. This study uses a combination of quantitative and qualitative methods: a literature review, case study analysis, and risk assessment are combined to understand the problem thoroughly and to propose effective mitigation strategies, with this mixed-methods approach also used to evaluate the effectiveness of different artificial intelligence techniques [3].
A. Literature Review
The literature review involves a systematic examination of existing research on AI-based code understanding, data privacy, and security. Key goals include:
B. Case Study Analysis
Case study analysis provides real-world examples of privacy and security incidents involving AI-based code understanding systems. This involves:
C. Risk Assessment
The risk assessment aims to categorize and evaluate the potential privacy and security risks associated with AI-based code understanding systems. This involves:
D. Proposing Mitigation Strategies
Based on the findings from the literature review, case study analysis, and risk assessment, this study proposes a set of strategies to mitigate privacy and security risks. These strategies include:
IV. ANALYSIS
The integration of AI in code understanding presents challenges such as data privacy and security concerns [4]. The analysis section delves into the detailed examination of privacy and security risks associated with AI-based code understanding systems.
This includes investigating case studies of security breaches, assessing the potential impacts of identified risks, and evaluating the effectiveness of the proposed mitigation strategies.
A. Case Studies of Security Breaches
1. Case Study 1: OpenAI Codex
a. Incident Overview: OpenAI Codex, the AI model behind GitHub Copilot, faced scrutiny for inadvertently generating code snippets containing sensitive data. Instances were reported where Codex suggested code that included passwords, API keys, and proprietary algorithms embedded in its training data.
b. Causes and Consequences: Data Inclusion: Codex was trained on publicly available code repositories, some of which contained embedded sensitive data that was not adequately sanitized. Exposure Risk: Developers using Codex received suggestions that included this sensitive data, potentially leading to unauthorized access and misuse.
c. Impact: The exposure of sensitive data could result in security breaches, financial losses, and damage to the reputation of organizations whose proprietary information was leaked.
d. Response and Mitigation
2. Case Study 2: GitHub Copilot
a. Incident Overview: GitHub Copilot, another AI-based code understanding tool, was found to sometimes generate code snippets that were direct copies of copyrighted code from its training data. This raised concerns about intellectual property theft and legal implications.
b. Causes and Consequences: Training Data Composition: Copilot was trained on a large corpus of publicly available code, which included code with restrictive licenses.
c. Intellectual Property Risk: The tool generated suggestions that mirrored exact code from the training data, violating copyright laws and exposing users to legal risks.
d. Impact: The use of copyrighted code without proper attribution or licensing could lead to legal actions against developers and organizations.
e. Response and Mitigation: Data Curation: GitHub introduced measures to improve the curation of training data, focusing on compliance with licensing terms. Transparency and Attribution: Efforts were made to provide clearer attribution and usage guidelines for generated code snippets.
B. Risk Assessment
1. Risk Categorization
a. Technical Risks: Accidental exposure of sensitive information (e.g., passwords, API keys), generation of insecure or flawed code, and unauthorized access due to insufficient security measures.
b. Legal Risks: Intellectual property infringement and noncompliance with data protection regulations.
c. Ethical Risks: Bias in code suggestions, propagation of bad coding practices, and lack of transparency in AI decision-making processes.
2. Impact Analysis
a. High Impact: Exposure of sensitive data can lead to serious security breaches, financial losses, and reputational damage. Intellectual property theft can result in legal disputes and significant penalties.
b. Medium Impact: Generation of insecure code can introduce vulnerabilities into software, requiring additional resources for code review and debugging.
c. Low Impact: Bias in code suggestions may not pose immediate risks but can contribute to long-term issues in software quality and diversity in coding practices.
V. MITIGATION STRATEGIES FOR DATA PRIVACY AND SECURITY RISKS IN AI-BASED CODE UNDERSTANDING
The integration of AI-based code understanding systems into software development introduces significant privacy and security challenges. To address these issues, it is essential to implement robust strategies focusing on data anonymization, access controls, audit mechanisms, encryption, and adherence to legal and ethical frameworks. This section outlines comprehensive mitigation strategies to protect sensitive and proprietary code-related data.
A. Data Anonymization
Data anonymization is a crucial step in ensuring that sensitive information within training datasets is not exposed [1] [2]. This process involves transforming data in a manner that prevents the identification of individuals or sensitive details. Techniques such as data masking, tokenization, pseudonymization, and redaction are employed to anonymize sensitive information. Data masking replaces sensitive information with fictional but realistic data, such as substituting actual API keys and passwords with placeholder text. Tokenization replaces sensitive data elements with non-sensitive tokens that can be mapped back to the original data through a secure tokenization system. Pseudonymization converts private identifiers into pseudonyms or artificial identifiers, changing usernames and email addresses to generic identifiers.
Redaction completely removes or obscures sensitive information from datasets, stripping out proprietary code sections or comments containing sensitive information before using them to train AI models. Implementation of these techniques involves pre-processing datasets to ensure they do not contain any sensitive information before training AI models. Regular audits are conducted to ensure that anonymization processes are effectively applied and maintained over time. Anonymization significantly reduces the risk of exposing sensitive information, making it more difficult for unauthorized parties to misuse the data. Adhering to data anonymization techniques also helps comply with data protection regulations such as GDPR and CCPA, which mandate the protection of personal data.
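As a minimal illustration of how such pre-processing might look, the following Python sketch applies masking and pseudonymization to a code snippet before it enters a training corpus. The regular expressions and placeholder values are simplified assumptions for illustration; a production pipeline would use far broader detectors, such as entropy-based secret scanners.

import re

# Illustrative detectors only; real pipelines need broader coverage.
PATTERNS = {
    "api_key": re.compile(r"(?i)(api[_-]?key\s*[:=]\s*)['\"][^'\"]+['\"]"),
    "password": re.compile(r"(?i)(password\s*[:=]\s*)['\"][^'\"]+['\"]"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def anonymize(source: str) -> str:
    """Mask credentials and pseudonymize emails before training."""
    # Data masking: replace secret values with placeholder text.
    source = PATTERNS["api_key"].sub(r"\1'<REDACTED_API_KEY>'", source)
    source = PATTERNS["password"].sub(r"\1'<REDACTED_PASSWORD>'", source)
    # Pseudonymization: swap identifying emails for a generic identifier.
    source = PATTERNS["email"].sub("user@example.com", source)
    return source

snippet = 'API_KEY = "sk-live-123456"\nnotify("alice@corp.example")'
print(anonymize(snippet))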
B. Access Controls
Access controls are security measures designed to regulate who or what can view or use resources within a computing environment. Implementing stringent access controls is critical to prevent unauthorized access to AI systems and sensitive code repositories. Techniques such as role-based access control (RBAC), multi-factor authentication (MFA), and the least privilege principle are employed to strengthen authentication and authorization mechanisms. RBAC assigns permissions to users based on their role within the organization, ensuring that only authorized personnel have access to sensitive data and systems. MFA requires multiple forms of verification for access, such as a combination of passwords, security tokens, and biometric verification. The least privilege principle ensures that users are granted the minimum levels of access necessary to perform their job functions, thereby limiting the potential for misuse. Implementation of these techniques involves clearly defining user roles and their associated access permissions and implementing these roles within the access control systems. Periodic reviews and updates of user roles and permissions are conducted to ensure they remain appropriate as organizational roles change and employees join or leave the organization. Security policies mandating the use of strong passwords, MFA, and other access control measures are developed and enforced. Robust access controls prevent unauthorized individuals from accessing sensitive data and systems, thereby reducing the risk of data breaches. Regular reviews and updates to access controls help maintain a strong security posture in the face of evolving threats.
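To make the role-based model concrete, the sketch below shows one way RBAC with a default-deny, least-privilege policy might be expressed in Python. The roles and permissions are hypothetical examples, not a prescribed schema.

from enum import Enum, auto

class Permission(Enum):
    READ_CODE = auto()
    WRITE_CODE = auto()
    MANAGE_SECRETS = auto()

# Hypothetical role-to-permission mapping for illustration.
ROLE_PERMISSIONS = {
    "developer": {Permission.READ_CODE, Permission.WRITE_CODE},
    "auditor": {Permission.READ_CODE},
    "admin": {Permission.READ_CODE, Permission.WRITE_CODE,
              Permission.MANAGE_SECRETS},
}

def is_authorized(role: str, permission: Permission) -> bool:
    # Least privilege: anything not explicitly granted is denied.
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("developer", Permission.WRITE_CODE)
assert not is_authorized("auditor", Permission.MANAGE_SECRETS)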
C. Audit Mechanisms
Audit mechanisms involve tracking and recording activities within systems to ensure appropriate use and to detect and respond to security incidents promptly. Techniques such as logging, monitoring, and alerting are employed to establish comprehensive logging and monitoring systems. Logging involves implementing comprehensive logging of all access and activities on systems that handle sensitive data. Logs capture details such as user identity, timestamp, and the nature of the activity. Monitoring involves using automated tools to continuously monitor logs for suspicious activities, such as unauthorized access attempts or unusual patterns of behavior. Alerting involves setting up real-time alerts to notify administrators of potential security incidents, allowing for immediate investigation and response. Implementation of these techniques involves deploying log management systems to collect, store, and analyze log data securely, with robust search and analysis capabilities. Clear audit trails documenting all activities related to data access and use are established, ensuring these trails are tamper-proof and retained for an appropriate period. Regular audits of logs and audit trails are conducted to ensure compliance with security policies and to identify areas for improvement. Audit mechanisms provide transparency into system usage and hold users accountable for their actions, deterring malicious behavior. Effective logging and monitoring enable rapid detection and response to security incidents, minimizing potential damage.
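A minimal sketch of such an audit trail, assuming a simple append-only JSON-lines log, is shown below. The field names are illustrative; a production system would ship these records to a dedicated, tamper-resistant log management platform and raise alerts on suspicious patterns such as repeated denied-access events.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="audit.log", level=logging.INFO,
                    format="%(message)s")

def audit(user: str, action: str, resource: str) -> None:
    # Capture who did what, to which resource, and when (UTC).
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "resource": resource,
    }
    logging.info(json.dumps(entry))

audit("alice", "read", "repo/payments-service")
audit("bob", "denied_access", "secrets/prod-db-password")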
D. Encryption
Encryption is the process of converting data into a code to prevent unauthorized access, ensuring that data remains secure even if it is intercepted or accessed without authorization. Implementing robust encryption practices can further enhance data security [5]. Techniques such as data-at-rest encryption, data-in-transit encryption, and end-to-end encryption are employed to protect data. Data-at-rest encryption involves encrypting data stored on disks or other storage media, including code repositories and training datasets. Data-in-transit encryption involves encrypting data transmitted across networks using secure protocols such as HTTPS, TLS, and VPNs. End-to-end encryption ensures data is encrypted at all stages of transmission, from the sender to the recipient, without being decrypted at intermediate points. Implementation of these techniques involves using strong, industry-standard encryption algorithms such as AES-256 for data-at-rest and TLS 1.2/1.3 for data-in-transit. Robust key management practices, including secure generation, distribution, storage, and rotation of encryption keys, are implemented. Encryption practices are ensured to comply with relevant regulations and standards, such as FIPS 140-2 and GDPR. Encryption ensures that even if data is accessed without authorization, it remains unintelligible and secure. Many data protection regulations mandate the use of encryption to protect sensitive information, helping organizations achieve compliance.
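As a small data-at-rest sketch, the snippet below uses the Python cryptography package's Fernet recipe (AES in CBC mode with HMAC authentication). Note that this recipe uses a 128-bit AES key rather than the AES-256 mentioned above; AES-256-GCM is available through the same library's lower-level API, but Fernet keeps the illustration minimal.

from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # in practice, obtained from a key manager
cipher = Fernet(key)

training_snippet = b"def connect(): return db.open(DB_URI)"
encrypted = cipher.encrypt(training_snippet)

# Without the key the ciphertext is unintelligible; with it, recovery is exact.
assert cipher.decrypt(encrypted) == training_snippet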
E. Legal and Ethical Frameworks
Legal and ethical frameworks provide structured guidelines to govern the responsible use of AI and the protection of sensitive data, ensuring that AI systems are used ethically and lawfully. Components of these frameworks include policies and procedures, training and awareness, and compliance programs. Comprehensive policies and procedures outlining the ethical and legal requirements for using AI in code understanding are developed, including data usage, privacy protections, and regulatory compliance. Employees and developers are educated on the legal and ethical implications of using AI, emphasizing the importance of data protection and responsible AI practices. Programs to ensure compliance with data protection regulations such as GDPR, CCPA, and other relevant laws are established, including regular audits and assessments. Implementation of these components involves collaborating with legal and compliance experts to develop policies addressing the specific risks and requirements associated with AI-based code understanding. Ethics committees are formed to oversee the development and deployment of AI systems, ensuring that ethical considerations are integrated into decision-making processes. Regular training sessions for employees and developers are conducted to raise awareness about the importance of data protection and the ethical use of AI. Legal and ethical frameworks provide a structured approach to managing the risks associated with AI, ensuring that data protection is prioritized. Adhering to legal and ethical guidelines helps build trust with customers, partners, and regulators, enhancing the organization’s reputation. By implementing these comprehensive mitigation strategies, organizations can significantly reduce the privacy and security risks associated with AI-based code understanding systems. This ensures that the benefits of AI in software development are realized without compromising the privacy and security of sensitive code-related data.
VI. PROPOSED APPROACHES FOR DATA PROTECTION IN AI-BASED CODE UNDERSTANDING
The expanding adoption of AI-based code understanding systems requires robust approaches to data protection to mitigate privacy and security risks. This section explores detailed techniques to improve data privacy and security, focusing on local AI usage, centralized secret storage, thorough review of AI-generated code, automated code scanning, dependency management, updating AI-generated code, and secure coding standards.
A. Consider Local or On-Device AI
Using local or on-device AI models can significantly enhance data privacy by keeping sensitive data and computational processes on the user’s device rather than transmitting them to cloud servers. This approach reduces the risk of data exposure in transit and minimizes reliance on external cloud providers. Local AI tools such as Pieces and on-device models like Llama 2 exemplify this technique. Pieces can generate context-aware code snippets, improving developer productivity while ensuring that data remains on-device. Similarly, Llama 2 offers powerful on-device AI capabilities for code generation and analysis, maintaining network security and reducing the potential vulnerabilities associated with cloud storage. By leveraging local AI models, organizations can ensure that sensitive data is processed securely within the local environment, reducing the attack surface for malicious actors.
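As a hedged sketch of what on-device generation can look like, the snippet below runs a locally downloaded model through the Hugging Face transformers pipeline. The specific model name is an assumption for illustration; any code model small enough for local hardware would serve. The key property is that neither the prompt nor the completion leaves the machine.

from transformers import pipeline  # pip install transformers

generator = pipeline(
    "text-generation",
    model="codellama/CodeLlama-7b-hf",  # assumed choice; weights run locally
)

prompt = "# Python function that validates an email address\ndef "
completion = generator(prompt, max_new_tokens=64)[0]["generated_text"]
print(completion)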
B. Store Secrets in a Central Location
Centralized storage solutions for sensitive data, such as API keys, passwords, and encryption keys, are essential to mitigate security risks. These centralized stores should use robust encryption techniques and multifactor authentication (MFA) to protect stored secrets from unauthorized access. Tools like HashiCorp Vault and AWS Secrets Manager provide secure storage solutions, ensuring that secrets are encrypted both at rest and in transit. Implementing centralized storage not only streamlines secret management but also improves security by enforcing consistent access controls and auditing capabilities. Regular audits and monitoring of access to the centralized store can help detect and respond to unauthorized access attempts, ensuring the ongoing security of sensitive data.
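For illustration, the sketch below reads a secret from AWS Secrets Manager at runtime instead of embedding it in source code; the secret name and region are placeholders. Access is then governed by IAM policy and recorded for audit, rather than depending on which repositories the code ends up in.

import boto3  # pip install boto3

client = boto3.client("secretsmanager", region_name="us-east-1")

# The secret is fetched at runtime and held only in memory; it never
# appears in the codebase or in any training corpus built from it.
response = client.get_secret_value(SecretId="prod/db-password")
db_password = response["SecretString"]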
C. Review AI-Generated Code for Security Flaws and Bad Logic
While AI-generated code can significantly improve development efficiency, it must be thoroughly reviewed for potential security flaws and logical errors. Human oversight is crucial to ensure that AI-generated code adheres to secure coding practices and does not introduce vulnerabilities. Tools like Pieces can generate more contextualized and relevant code snippets, but developers must scrutinize the generated code to ensure its correctness and security. Secure code practices include using static analysis tools to identify vulnerabilities, performing peer reviews, and keeping dependencies up to date.
Regular code reviews should include checking for common security issues such as injection attacks, insecure configurations, and improper error handling. By incorporating reviews of AI-generated code into the development workflow, organizations can maintain high security and code quality standards.
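The contrast below illustrates the kind of issue such reviews catch, assuming a reviewer inspecting a generated database query: string concatenation lets attacker-controlled input rewrite the query, while a parameterized query treats the input strictly as data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Insecure pattern occasionally seen in generated code (do not use):
#   conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# Secure pattern a reviewer should insist on: parameterization.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # [] -- the injection attempt matches nothing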
D. Incorporate Automated Code Scanning Tools
Automated code scanning tools, such as SonarQube and Checkmarx, are crucial for automatically identifying vulnerabilities and bad practices in code. These tools should be integrated into the development pipeline to detect and address security issues early in the development process. SonarQube provides continuous inspection of code quality, highlighting security vulnerabilities, bugs, and code smells. Checkmarx offers comprehensive security scanning, identifying issues such as SQL injection, cross-site scripting (XSS), and insecure deserialization. Integrating these tools into the CI/CD pipeline ensures that code is continuously checked for vulnerabilities, preventing flawed code from reaching production. Automated scans should be complemented by manual reviews to ensure comprehensive coverage and address any false positives.
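A minimal sketch of such a pipeline gate is shown below, using the open-source Bandit scanner as a freely available stand-in for commercial tools like SonarQube or Checkmarx; the scanned directory is an assumption. Bandit exits with a nonzero status when it reports findings, which the script turns into a failed build.

import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "src/"],  # assumed project layout
    capture_output=True,
    text=True,
)
print(result.stdout)

if result.returncode != 0:
    # Fail the CI stage so flawed code never reaches production.
    sys.exit("Static scan reported potential vulnerabilities.")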
E. Review Dependencies
Third-party dependencies can introduce significant security risks if not properly managed. It is essential to carefully review and update dependencies to ensure they are secure and up to date. Tools like Snyk Open Source can help identify vulnerabilities in open-source dependencies and recommend fixes. Snyk integrates with development tools and workflows, providing real-time vulnerability assessments and remediation advice. Furthermore, package managers like npm and pip, combined with automated tools, can assist in managing dependency updates. Regularly updating dependencies reduces the risk of vulnerabilities being exploited in production environments. Developers should also monitor security advisories for their dependencies and apply patches promptly to mitigate potential risks.
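As a simple starting point, the sketch below lists outdated packages using pip's own JSON output; a fuller review would combine this with a vulnerability feed such as Snyk's or a tool like pip-audit.

import json
import subprocess

output = subprocess.run(
    ["pip", "list", "--outdated", "--format=json"],
    capture_output=True, text=True, check=True,
).stdout

for pkg in json.loads(output):
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")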
F. Identify Required Updates in AI-Generated Code
AI-generated code must be cross-referenced with the latest versions of libraries and APIs to ensure it is up to date and secure. Automated tools and manual checks can help maintain the security and efficiency of the project. Tools like Dependabot and Renovate can automatically detect outdated dependencies in AI-generated code and create pull requests to update them. Manual reviews should verify that the AI-generated code aligns with the latest best practices and security guidelines. Keeping AI-generated code up to date with the latest libraries and APIs ensures that it benefits from recent security improvements and performance enhancements, reducing the risk of vulnerabilities.
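One lightweight way to cross-reference generated code against known-good versions is sketched below: pins from an AI-generated requirements list are checked against a team-maintained minimum-secure-version table. Both the requirements and the table are hypothetical values for illustration.

from packaging.version import Version  # pip install packaging

# Hypothetical floor versions a security team might maintain.
MIN_SECURE = {"requests": Version("2.31.0"), "flask": Version("2.3.2")}

generated_requirements = ["requests==2.25.1", "flask==2.3.3"]

for line in generated_requirements:
    name, _, pinned = line.partition("==")
    if name in MIN_SECURE and Version(pinned) < MIN_SECURE[name]:
        print(f"UPDATE NEEDED: {line} (minimum secure: {MIN_SECURE[name]})")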
G. Learn About Secure Coding Standards
Adhering to secure coding standards is crucial to building secure and robust software. Resources like the OWASP Top Ten provide a comprehensive list of common security risks and guidelines for mitigating them. Interactive training platforms like Secure Code Warrior offer hands-on exercises to help developers understand and implement secure coding practices. A thorough secure code review process, involving multiple team members, can further improve code security by ensuring that all potential vulnerabilities are identified and addressed. By cultivating a culture of security within the development team, organizations can significantly reduce the risk of security breaches. In conclusion, implementing these detailed strategies can effectively mitigate the data privacy and security risks associated with AI-based code understanding systems. By leveraging local AI models, securing centralized storage, thoroughly reviewing AI-generated code, incorporating automated code scanning tools, managing dependencies, updating AI-generated code, and adhering to secure coding guidelines, organizations can ensure the secure and efficient use of AI in software development. These approaches not only strengthen the security posture of AI systems but also build trust with stakeholders by demonstrating a commitment to data protection and ethical AI practices.
VII. CONCLUSION
The integration of artificial intelligence (AI) into software development has introduced significant privacy and security challenges. AI-based code understanding systems, such as OpenAI Codex and GitHub Copilot, have been found to generate code snippets containing sensitive information, leading to potential unauthorized access and misuse. This study aimed to provide a comprehensive understanding of the privacy and security risks associated with these systems and propose effective mitigation strategies. The literature review examined the foundational principles and techniques behind AI-based code understanding systems, including machine learning algorithms, training datasets, and code analysis methods. The assessment of risks involved analyzing scholarly articles, industry reports, and case studies that discuss the privacy and security risks inherent in these systems. The case study analysis presented real-world examples of privacy and security incidents involving AI-based code understanding systems. Notable cases included OpenAI Codex inadvertently generating code snippets containing sensitive information and GitHub Copilot sometimes generating code directly copied from copyrighted sources in its training data. The risk assessment categorized and evaluated the potential privacy and security risks associated with AI-based code understanding systems. The proposed mitigation strategies included data anonymization, access controls, audit mechanisms, encryption, and adherence to legal and ethical frameworks. Implementing these comprehensive mitigation strategies is crucial to protect sensitive and proprietary code-related data from unauthorized access or misuse. Data anonymization ensures that sensitive information within training datasets is not exposed. Access controls regulate who or what can view or use resources within a computing environment. Audit mechanisms involve tracking and recording activities within systems to ensure appropriate use. Encryption ensures that even if data is accessed without authorization, it remains unintelligible and secure. Adhering to legal and ethical frameworks provides structured guidelines to govern the responsible use of AI and the protection of sensitive data. By implementing these strategies, organizations can significantly reduce the privacy and security risks associated with AI-based code understanding systems. This ensures that the benefits of AI in software development are realized without compromising the privacy and security of sensitive code-related data.
REFERENCES
[1] J. Klemmer, S. A. Horstmann, N. Patnaik, C. Ludden, C. Burton, C. Powers, F. Massacci, A. Rahman, D. Votipka, H. Lipford, A. Rashid, and A. Naiakshina, "Using AI assistants in software development: A qualitative study on security practices and concerns," 2024.
[2] J. Wen, D. Yuan, L. Ma, and H. Chen, "Code ownership in open-source AI software security," arXiv.org, 2023.
[3] H. Pearce, B. Ahmad, B. Tan, B. Dolan-Gavitt, and R. Karri, "Asleep at the keyboard? Assessing the security of GitHub Copilot's code contributions," IEEE Symposium on Security and Privacy, 2022.
[4] L. Floridi, J. Cowls, M. Beltrametti, R. Chatila, P. Chazerand, V. Dignum, C. Luetge, R. Madelin, U. Pagallo, F. Rossi, B. Schafer, P. Valcke, and E. Vayena, "AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations," Minds and Machines, 2018.
[5] J. Ren, H. Xu, P. He, Y. Cui, S. Zeng, J. Zhang, H. Wen, J. Ding, H. Liu, Y. Chang, and J. Tang, "Copyright protection in generative AI: A technical perspective," arXiv.org, 2024.
Copyright © 2024 Samhita Adhyapak, Shivani Nair, Prof. Hrishikesh Mogare. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET63423
Publish Date : 2024-06-23
ISSN : 2321-9653
Publisher Name : IJRASET