
Date: November 18, 2025    Source: IPRdaily

Intellectual Property Protection for AI Model Architecture and Parameters

The architecture and parameters of an artificial intelligence model are its core components, and intellectual property law can provide corresponding legal protection. In the author’s view, discussion of IP protection for AI model architecture and parameters should begin from their intrinsic characteristics, determine whether they can be protected under specialized IP statutes, and then consider whether the Anti-Unfair Competition Law may apply. Systemic reasoning should be employed to properly handle the relationship between specialized IP laws and the Anti-Unfair Competition Law.


Technical Overview of AI Model Architecture and Parameters


(1) Architecture of an AI Model

The architecture determines an AI model’s computational mechanisms and processing capabilities. Although AI models adopt diverse architectures, they share a common design principle: a specific structure is combined with large-scale data to learn the relationship between inputs and outputs. Architectures vary with the model’s task and functional requirements, but most contain core components such as an input layer, processing layers, an output layer, a loss function, and an optimization algorithm.


The input layer can be viewed as a complex and delicate preprocessing pipeline. Its core function is to convert human-readable, unstructured raw data into numerical, structured initial vector representations that the model can understand. Through the input layer, disordered raw data is transformed into machine-digestible information.
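To make this concrete, the following is a minimal sketch of how an input layer might turn raw text into initial vector representations. The vocabulary, dimensions, and values are hypothetical and serve only as an illustration; a real model would use a learned tokenizer and learned embeddings.

```python
import numpy as np

# Hypothetical toy vocabulary; a real input layer uses a learned tokenizer.
vocab = {"<unk>": 0, "the": 1, "model": 2, "reads": 3, "text": 4}
embed_dim = 8
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), embed_dim))  # learned during training in practice

def encode(sentence: str) -> np.ndarray:
    """Turn unstructured raw text into structured initial vector representations."""
    token_ids = [vocab.get(word, vocab["<unk>"]) for word in sentence.lower().split()]
    return embedding_table[token_ids]            # shape: (num_tokens, embed_dim)

vectors = encode("The model reads text")
print(vectors.shape)                             # (4, 8): one numerical vector per token
```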


The processing layers convert the initial vector representations into high-level representations rich in semantic information and prepare them for final output. A model’s processing capability largely depends on the number of such layers; model architecture consists of stacked processing layers from lower to higher levels. Through progressive transformation, the model constructs increasingly abstract understandings of the input, ultimately enabling sophisticated comprehension, reasoning, and generation.
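A rough illustration of this stacking, again hypothetical and in NumPy: each "processing layer" is reduced to a weight matrix plus a nonlinearity, whereas real architectures use attention, convolution, recurrence, and similar mechanisms.

```python
import numpy as np

rng = np.random.default_rng(1)
embed_dim, num_layers = 8, 4

# Hypothetical layer weights; each layer here is just a matrix plus a nonlinearity.
layer_weights = [rng.normal(scale=0.5, size=(embed_dim, embed_dim)) for _ in range(num_layers)]

def process(x: np.ndarray) -> np.ndarray:
    """Stack layers from lower to higher levels, progressively transforming the input."""
    for w in layer_weights:
        x = np.tanh(x @ w)        # each pass yields a more abstract representation
    return x

hidden = process(rng.normal(size=(4, embed_dim)))   # e.g. the vectors produced by the input layer
print(hidden.shape)                                 # (4, 8): same shape, higher-level representation
```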


The output layer is the final component of the model pipeline. Its core responsibility is to convert the high-dimensional, semantically rich hidden states into the final form required by the task. As the bridge between internal representations and downstream tasks, its design is task-specific. Different generation strategies influence model behavior, trading off determinism, creativity, and coherence.
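A minimal sketch of an output layer, assuming a hypothetical projection matrix and vocabulary size; the temperature parameter illustrates how a generation strategy trades determinism against creativity.

```python
import numpy as np

rng = np.random.default_rng(2)
embed_dim, vocab_size = 8, 5
output_projection = rng.normal(size=(embed_dim, vocab_size))   # learned in practice

def generate(hidden_state: np.ndarray, temperature: float = 1.0) -> int:
    """Map a semantically rich hidden state to a concrete output token."""
    logits = hidden_state @ output_projection
    if temperature == 0:                        # deterministic strategy: always the top choice
        return int(np.argmax(logits))
    probs = np.exp(logits / temperature)        # higher temperature favours creativity
    probs = probs / probs.sum()
    return int(rng.choice(vocab_size, p=probs))

h = rng.normal(size=embed_dim)
print(generate(h, temperature=0.0), generate(h, temperature=1.0))
```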

A loss function guides the model’s learning direction by measuring the discrepancy between predicted and true values. The goal of training is to minimize this loss. While loss functions differ by model type, their core idea is to align predicted distributions as closely as possible with real data distributions.
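As an illustration of this core idea, here is a short cross-entropy sketch with hypothetical logits: the loss shrinks as the predicted distribution concentrates on the true label, and training adjusts parameters to push this value towards its minimum.

```python
import numpy as np

def cross_entropy(logits: np.ndarray, true_index: int) -> float:
    """Measure how far the predicted distribution is from the true label."""
    probs = np.exp(logits - logits.max())        # softmax, numerically stabilised
    probs = probs / probs.sum()
    return float(-np.log(probs[true_index]))

# A prediction that concentrates probability on the correct class has a small loss.
print(cross_entropy(np.array([2.0, 0.5, 0.1]), true_index=0))   # low loss
print(cross_entropy(np.array([0.1, 0.5, 2.0]), true_index=0))   # high loss
```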


The optimization algorithm determines how model parameters are updated based on feedback from the loss function. During pre-training, the model learns general representations from massive unlabeled data, requiring optimization methods that are efficient, stable, and computationally scalable. After pre-training, parameters must be fine-tuned for specific tasks or human alignment. Modern efficient fine-tuning strategies freeze most pre-trained parameters and train only newly introduced ones.
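A minimal sketch of this freeze-and-train pattern, assuming PyTorch and a hypothetical two-part model (a small pre-trained backbone plus a newly added task head); it is an illustration of the general idea rather than any specific fine-tuning method.

```python
import torch
from torch import nn

# Hypothetical setup: a small pre-trained backbone plus a newly added task head.
backbone = nn.Sequential(nn.Linear(8, 8), nn.Tanh(), nn.Linear(8, 8))
task_head = nn.Linear(8, 3)

# Efficient fine-tuning: freeze the pre-trained parameters ...
for param in backbone.parameters():
    param.requires_grad = False

# ... and let the optimizer update only the newly introduced ones.
optimizer = torch.optim.AdamW(task_head.parameters(), lr=1e-3)

x, y = torch.randn(16, 8), torch.randint(0, 3, (16,))
loss = nn.functional.cross_entropy(task_head(backbone(x)), y)
loss.backward()            # gradients flow only into the unfrozen head
optimizer.step()
```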


(2) Parameters of an AI Model

Parameters are also core components that determine how a model generates outputs from inputs. Parameters are learned during training through optimization algorithms. The purpose of training is to identify optimal parameter values that minimize the loss function. Parameters may be imagined as numerous internal “knobs”; training continuously adjusts these knobs so that the model produces expected outputs given certain inputs. Once training is complete, parameters become fixed numerical values that encode all representations and knowledge learned during training.
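A small illustration of this point, with hypothetical parameter values: once training is finished, the saved parameter file alone is enough to reproduce the model's input-to-output mapping.

```python
import numpy as np

# Once training is complete, parameters are nothing but fixed numerical values
# (the values below are hypothetical).
trained_params = {"weight": np.array([[0.7, -1.2], [0.3, 0.9]]),
                  "bias":   np.array([0.1, -0.4])}

np.savez("model_params.npz", **trained_params)    # the parameter file

# Loading the file recovers exactly the same "knob settings" ...
restored = np.load("model_params.npz")

def forward(x: np.ndarray) -> np.ndarray:
    return x @ restored["weight"] + restored["bias"]

# ... and therefore exactly the same input-to-output behaviour.
print(forward(np.array([1.0, 2.0])))
```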


In common neural networks, weights and biases are the primary types of parameters. As data passes through layers, it is multiplied by weights, added to biases, and typically processed by nonlinear activation functions. This continues layer by layer until the final output is produced. In short, both the number and specific values of parameters are decisive for model performance—they determine how input data is combined, transformed, and ultimately used to generate predictions.
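A toy forward pass, with hypothetical weights and biases, showing that the specific parameter values directly determine the prediction: changing even a single weight changes how inputs are combined and therefore changes the output.

```python
import numpy as np

def layer(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One layer: multiply by the weights, add the bias, apply a nonlinear activation."""
    return np.tanh(x @ w + b)

x = np.array([1.0, -0.5])
w1, b1 = np.array([[0.4, -0.8], [0.2, 0.6]]), np.array([0.1, 0.0])   # hypothetical values
w2, b2 = np.array([[0.9], [-0.3]]), np.array([0.05])

print(layer(layer(x, w1, b1), w2, b2))    # baseline prediction

# Changing a single weight changes how inputs are combined, and so the output.
w1[0, 0] = 1.5
print(layer(layer(x, w1, b1), w2, b2))    # a different prediction from the same input
```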


IP Protection for Model Architecture and Parameters


(1) Patent Protection

Patent offices in major jurisdictions, including China, have already granted numerous patents related to AI model architectures. These include device and method patents covering architecture design, improvement, and application. This demonstrates that model architectures and methods for optimizing them can be patented. Likewise, patent offices have granted many patents involving model parameters, mainly method patents focusing on parameter optimization, compression, updating, and management. This shows that methods of training or adjusting parameters can be patented.


Thus, model architectures and improvements, as well as methods involving model parameters, constitute technical solutions and satisfy patentable-subject-matter requirements. If they further meet the standards of novelty, inventiveness, and utility, patent protection is available.


(2) Copyright Protection

AI model developers use human-readable programming languages to define the layer structures and inter-layer connections that constitute the model architecture, resulting in source code that embodies that architecture. Thus, the source program defining the architecture is the expression and carrier of the model architecture. Such source code clearly falls under the category of “computer software” expressly protected by copyright law. Under the Regulations on Computer Software Protection, software includes programs and related documentation; source code and object code for the same program constitute a single work. Therefore, the source code, object code, and associated development documentation defining the model architecture may all receive copyright protection.


By contrast, parameters are numerical values derived automatically by the optimization algorithm. After training, each parameter takes a fixed value, and these values are stored in a data file in one-to-one correspondence with the model’s parameters. Such a file does not embody a developer’s ideas or creative expression. The parameter file is therefore not a work under copyright law and cannot receive copyright protection.


It should be emphasized that the architecture and the parameters must operate together; without either one the model cannot function, just as a building-block set cannot be assembled from the instructions alone or from the blocks alone. Together, the program defining the architecture and the parameter file constitute a runnable software system.


(3) Trade Secret Protection

Under China’s Anti-Unfair Competition Law, trade secrets must be unknown to the public, have commercial value, and be subject to confidentiality measures taken by the rights holder. Model architecture and parameters fall within the scope of technical information (structures, algorithms, data, programs, documentation, and the like) and are therefore eligible subject matter for trade secret protection. If they also satisfy the statutory requirements, they may be protected.


Where architecture or parameters fail any statutory element—e.g., if known in the field, publicly obtainable, or disclosed without confidential measures after market release—they cannot be protected as trade secrets.


Analysis of the Application of Article 2 of the Anti-Unfair Competition Law

Article 2 is the general clause of the Anti-Unfair Competition Law. As discussed, patent law, copyright law, and trade secret provisions can protect eligible model architectures and parameters. The following sections explore how these regimes should be applied in relation to one another.


(1) Relationship Between the Anti-Unfair Competition Law and Patent Law

Model architecture and parameter-related methods fall within patent-eligible subject matter. But patentability is merely a threshold: inventors must submit patent applications. If an inventor applies for and obtains a patent, unlicensed use by others constitutes infringement.


If an inventor does not apply or applies but fails to obtain a patent, and then introduces the model to the market, others’ use of the model does not constitute patent infringement. Applying Article 2 in such circumstances would effectively protect unpatented or rejected inventions—undermining the patent system by granting patent-like protection despite lack of application or authorization. Therefore, Article 2 should not be used to regulate such conduct.


(2) Relationship Between the Anti-Unfair Competition Law and Copyright Law

As noted, model architecture appears as developer-authored source code and is protected as computer software. For two models with identical or substantially similar architecture defined in the same programming language, their source code will likewise be identical or substantially similar. Whether this constitutes copyright infringement depends on specific analysis.


If both models were independently developed—even if similar in architecture—each developer holds copyright in their own source program. Similarity alone does not imply infringement. In such cases, Article 2 should also not be applied; doing so would contradict copyright principles.


If Model B’s architecture is extracted or lightly modified from Model A obtained through public channels, then B’s source or object code must be identical or substantially similar to A’s. Extracting A effectively copies A’s source (if open-source) or object code (if closed-source), constituting at least reproduction infringement. Copyright law sufficiently addresses this scenario, and Article 2 need not apply. Conversely, if A’s developer has no copyright in the code, copying does not infringe—and Article 2 also should not be invoked, as this would contradict copyright principles.


(3) Relationship Between Article 2 and Trade Secret Provisions

As discussed, architecture and parameters are technical information. If they satisfy trade secret requirements, protection applies; if not, their use does not constitute misappropriation. Applying Article 2 to regulate use of information that is not a trade secret would effectively grant trade-secret-like protection to non-secret information—undermining the entire trade secret system. Therefore, where architecture and parameters fail to meet the statutory requirements and are acquired from public sources, Article 2 should not apply.


Systemic Application of Article 2 Under the Anti-Unfair Competition Law

Legal norms must not be interpreted or applied in isolation. Only by considering the legal system as a whole can contradictions and inconsistencies be avoided. When multiple legal norms appear applicable, systemic reasoning and the rules for resolving conflicts of norms (lex specialis derogat legi generali, higher-ranking law prevails over lower-ranking law, and newer law prevails over older law) should determine which provision applies. Interpretation must also observe the “anti-redundancy rule”: no interpretation may render another provision, legal institution, or statute superfluous.


As discussed, specialized IP law can protect model architecture and parameters when conditions are met. If they fail to satisfy these conditions, Article 2 should not be invoked, as doing so would violate the anti-redundancy rule.


Article 2 stands in a general-to-specific relationship with the trade secret provisions of the Anti-Unfair Competition Law. Under the principle that a special provision prevails over a general one, the trade secret provisions should apply first. If architecture or parameters do not qualify as trade secrets, Article 2 should not be used to regulate their use; otherwise this would violate both the anti-redundancy rule and the lex specialis principle.
