R&D Tax Incentive Registration Dispute: When a Digital Health and Fitness System Fails the “Core R&D Activities” Test Because the Claimed Experiments Were Not Proven to Have Been Conducted

Based on the authentic Australian decision [Applicant and Respondent (Taxation) [2025] ARTA 1813], this article breaks down the Tribunal's treatment of evidence and law. It translates complex judicial reasoning into clear, understandable key-point analysis, helping readers of all backgrounds identify the core of the dispute, follow the decision logic, make more informed litigation choices, and locate case resources for practical research.

Chapter 1: Case Overview and Core Disputes

Basic Information

Forum: Administrative Review Tribunal
Presiding Member: General Member C. Willis
Cause of Action: Review of findings refusing registration of claimed R&D activities under the R&D Tax Incentive scheme
Judgment Date: 17 September 2025
Core Keywords:
Keyword 1: Authentic Judgment Case
Keyword 2: R&D Tax Incentive
Keyword 3: Division 355 ITAA 1997
Keyword 4: IRD Act registration and findings
Keyword 5: “Core R&D activities” and experimentation
Keyword 6: Documentary proof and credibility of testing records

Background

The Applicant, a small Australian company, sought access to the Australian Government's Research and Development Tax Incentive. To do that, it needed to register specific activities as eligible "core R&D activities" or "supporting R&D activities". The project described by the Applicant was a digital Health and Fitness System, said to integrate decision support systems, artificial-intelligence-style algorithms, secure cloud computing, and user data captured from devices.

On paper, the claimed work sounded ambitious: algorithms for calorie consumption and calorie intake, a cloud decision support system to prevent “collisions” between diet and fitness programs, and later security and payment-related cloud components. The Respondent, a statutory administrator under the Industry Research and Development Act 1986 (Cth), ultimately issued findings that none of the registered activities for the relevant income years qualified, and those findings carried consequences for the Applicant’s tax position.

This case turned on a problem that arises repeatedly in R&DTI disputes: the law is not impressed by big ideas alone. The Tribunal’s task was to decide, on the evidence, whether the Applicant actually conducted the activities as registered in each relevant year, and if so, whether those activities satisfied the statutory definition of core and supporting R&D activities.

Core Disputes and Claims
  1. The Applicant’s claim: the registered activities across multiple years formed one cohesive project, involving experimentation and the generation of new knowledge, and should be assessed with a “whole of project” lens (even if components were spread across years).
  2. The Respondent’s position: the Tribunal must focus on the activities as registered for each income year, and the Applicant must prove those activities were conducted in that year, with evidence of experimentation, hypotheses, observations, and evaluation as required by the statutory scheme.
  3. The decisive legal focus: whether there was reliable evidence that the Applicant conducted the registered activities as claimed, particularly the experiments said to underpin them, and whether (if conducted) they met the statutory requirements in s 355-25 of the Income Tax Assessment Act 1997 (Cth) and related provisions.

Chapter 2: Origin of the Case

The story begins with a practical commercial reality: the R&D Tax Incentive can be a lifeline for small companies attempting technically risky development, because it can deliver a tax offset linked to eligible R&D expenditure. The Applicant described a project intended to bring together diet programs, fitness programs, and personal wellbeing and medical constraints into a single system that could recommend safe combinations.

The Applicant entered a service agreement with a registered research service provider. The development narrative, as presented by the Applicant, was a multi-year journey: early conceptual design and algorithm thinking, followed by documentation and testing, and ultimately a system that could be commercialised.

But the compliance architecture of the R&DTI scheme is procedural and evidentiary. Registration is a precondition, and later examination can retrospectively vary what is treated as registered. That means an Applicant who initially receives registration approval can later be required to prove, with records, that the activities were real, experimental, and conducted as described.

The conflict escalated when the Respondent moved from early compliance engagement into formal examination and findings. The Applicant perceived the process as unfair and disruptive, while the Respondent viewed it as a standard statutory inquiry: show the work, show the experiments, show how uncertainty was resolved through a systematic progression from hypothesis to testing, observation, evaluation, and conclusion.

By the time the matter returned for hearing, the dispute had matured into something larger than a tax disagreement. It became a forensic contest about documentation, version control, whether testing occurred, and whether the “deliverables” were evidence of real experimentation or merely evidence of planning and conceptualisation.


Chapter 3: Key Evidence and Core Disputes

Applicant’s Main Evidence and Arguments
  1. Registration applications describing hypotheses, experiments, observations, and conclusions for each activity in each year. For example, one activity described experiments targeting groups of individuals of different ages and health status, with controlled parameters like repeats, time of day, and weight. It then listed “observations” such as elderly people consuming more calories for the same activity, and concluded that unknown elements required combining variables into mathematical modelling.
  2. A large “R&D Manual” said to contain modules across the relevant years (conceptual design, functional specifications, detailed design, test plans, test cases, and supporting surveys).
  3. The H&F Test Manual and test forms, offered as the contemporaneous record of field experimentation.
  4. Lay evidence from key individuals involved, explaining the project’s intentions, progression, and constraints (including funding constraints).
  5. Expert evidence supporting the Applicant’s position, treating documented “deliverables” as evidence that activities were performed.

Respondent’s Main Evidence and Arguments
  1. The statutory focus is on registered activities for each income year.
  2. The Applicant must prove activities were conducted as registered, and must show reliable records of experimentation.
  3. Expert evidence from nutrition and exercise science and software engineering, expressing concern that documentation did not show experiments, did not show systematic progression, and lacked key artefacts expected if the described development and testing truly occurred.
  4. Specific critique of key documents: missing full source materials, documents appearing to contain copied external content, lack of executable software artefacts, and test case documents lacking dates or showing work outside the relevant years.

Core Dispute Points
  1. Year-by-year proof versus “whole of project” framing: can the Applicant rely on later-year material to prove earlier-year activities were conducted, and how far can the Tribunal look beyond the registered year?
  2. What counts as proof that experimentation happened: are diagrams, plans, and high-level test cases enough, or must there be contemporaneous records that actually align with the experiments described in the registrations?
  3. The credibility and reliability of the Applicant’s records, including versioning, dating, and alignment between narrative claims and what the documents objectively show.
  4. If experimentation did occur, did it address genuine technical uncertainty, or was it merely routine implementation and adaptation of known methods?

Chapter 4: Statements in Affidavits

Affidavit evidence in R&DTI cases often carries a strategic burden: it must do more than tell a story. It must connect the story to the statutory test.

In this case, the affidavits functioned as the Applicant’s bridge between big claims and hard proof. They sought to convert documents into evidence of activities by explaining what each document meant, how it related to a registered activity, and how work progressed year-to-year.

The Respondent’s challenge exposed the central vulnerability of affidavit-driven reconstruction: if the documents do not objectively demonstrate the experiments described in the registrations, affidavit explanations can look like narrative gloss. The Tribunal emphasised that volume of material is not the same as probative value. If documents do not align with the registered activity descriptions, they do not prove the activity was conducted.

Strategic intent behind procedural directions about affidavits in this kind of case is clear. The Tribunal needs the parties to identify what evidence proves what activity, in which year, and how it satisfies each element of s 355-25 and s 355-30. Without that mapping, a tribunal risks being swamped by paper while still being unable to answer the real statutory question.


Chapter 5: Court Orders

Before final hearing, the Tribunal’s procedural management in a long-running, document-heavy dispute typically includes:

  1. Directions linking proceedings so that multiple income years can be heard together where issues overlap.
  2. Orders requiring filing of updated expert reports, and sequencing lay evidence and expert evidence to ensure fairness.
  3. Timetabling for written closing submissions, including enforcement of deadlines to prevent prejudice and delay.
  4. Directions dealing with tender lists and identification of which documents are formally relied upon, particularly where tribunal books contain duplicates or multiple versions.

The practical purpose of these orders is to keep the hearing anchored to the statutory tests and to avoid a “document avalanche” obscuring the questions that must be answered.


Chapter 6: Hearing Scene: Ultimate Showdown of Evidence and Logic

Process Reconstruction: Restoring the Scene

The hearing’s defining feature was not the theatrical cross-examination of popular imagination. It was the slow, methodical pressure-testing of whether the Applicant’s documents actually proved what the Applicant said they proved.

A recurring line of forensic inquiry was simple: “Show where the experiment described in the registration application is recorded as conducted.”

When the Applicant relied on the H&F Test Manual and test forms as proof of experiments, the Tribunal’s analysis revealed a mismatch: even if tasks and actions in those materials occurred, the Tribunal found they did not align with the key aspects of the activity descriptions the Applicant registered for the relevant years.

The cross-examination also exposed the fragility of “testing” claims where the alleged testing record is sparse, undated, or at such a high level that it cannot verify what was done, when, and what outcomes were evaluated.

The hearing also scrutinised a document described as a test plan and test cases record for a later year. The Tribunal recorded that the Applicant’s witness confirmed that this document was the totality of evidence of testing of algorithms, and that the “testing” might be no more than reviewing a diagram or flow chart representing an algorithm. That concession mattered because the statutory scheme requires experimental activities whose outcomes are determined through systematic progression from hypothesis to experiment, observation, evaluation, and logical conclusions. A diagram review does not naturally demonstrate that chain.

Core Evidence Confrontation

The decisive evidence confrontation centred on:

  1. Whether the “R&D Manual” contained contemporaneous, year-specific evidence of experiments being conducted as registered, or whether it functioned mainly as a compilation of conceptual design and planning materials.
  2. Whether the test case documents were dated and linked to the relevant year, or whether they contained undated tables, material outside the relevant years, or content linked to other work.
  3. Whether expert criticism about missing software artefacts, absence of source code or executables, and signs of copied material undermined the Applicant’s claim that real experimentation and testing occurred.

Judicial Reasoning: How Facts Drove the Result

The Tribunal’s reasoning was anchored in a gatekeeping sequence:

  1. First question: were the registered activities conducted in the relevant income year?
  2. Only if yes: do they meet the statutory definition of core or supporting R&D activities?

This sequencing matters because it prevents the Tribunal being drawn into assessing hypothetical work, intentions, or unregistered by-products. The Tribunal framed the proper approach as a year-by-year inquiry into evidence of activities as registered, with limited allowance for out-of-year material only where it “sheds light” on what occurred in-year.

“… it is important to note at the outset that the Tribunal is tasked with considering the income year in question and not the project as a whole, unless activities outside the year in question shed light on the activities in that year. Section 355-25 focuses on the activities, not the overall project.”

That statement was determinative because it shut the door on the Applicant’s attempt to win by describing the “totality” of a grand project. The Tribunal’s task was narrower and more forensic: prove the registered activity happened, in the registered year, in the way registered.


Chapter 7: Final Judgment of the Court

The Tribunal affirmed the decisions under review. In effect, the Respondent’s findings that the registered activities did not qualify for registration as core or supporting R&D activities remained in force, with the practical consequence that the Applicant could not rely on those activities for R&DTI purposes in the relevant years.

The Tribunal’s reasoning culminated in a clear bottom line: despite the volume of documentation, the Applicant did not prove that the activities were conducted as registered, and expert evidence contradicted the Applicant’s account. The Tribunal therefore affirmed each of the review decisions.


Chapter 8: In-depth Analysis of the Judgment: How Law and Evidence Lay the Foundation for Victory

Special Analysis

This decision’s jurisprudential value lies in its disciplined insistence on sequence and proof in R&DTI litigation. The Tribunal treated “conducted as registered” as a threshold. That approach prevents the statutory scheme being diluted into a reward for ambition, documentation volume, or post-hoc reconstruction.

The case also illustrates the Tribunal’s practical sensitivity to how R&D is claimed in software-adjacent projects. A project can look innovative at the level of concept, but the statutory definition demands evidence of experimental activity and systematic progression. The Tribunal’s analysis indicates that when claimants rely heavily on conceptual design, functional specifications, and high-level diagrams, they risk failing at the first hurdle: proving the experiments were actually done.

The decision further demonstrates a recurring litigation trap: treating “deliverables” as a proxy for experimentation. Deliverables can show planning, design intent, or capability, but without credible experimental records they may not prove uncertainty was resolved through experiment, observation, and evaluation.

Judgment Points
  1. The statutory inquiry is activity-centred, not product-centred.
    The Tribunal rejected the idea that a “world first” product claim can substitute for proving the registered activities were conducted in-year. The legal system rewards proof, not hype.

  2. Threshold finding: “not conducted” can end the case.
    The Tribunal concluded that none of the activities were conducted as registered in the relevant years, which meant it was not strictly necessary to decide every element of s 355-25. This is a powerful reminder: if your evidence does not prove conduct, you never reach the debate about novelty and uncertainty.

  3. Document volume does not equal evidentiary weight.
    The Tribunal acknowledged the large record but found inconsistencies, gaps, and misalignment with the registered activity descriptions.

  4. Expert evidence was not treated as decoration; it was treated as a test of credibility and methodology.
    The Tribunal preferred expert evidence that was reasoned and tied to the deficiencies in the Applicant’s records, especially where an expert identified missing artefacts or absence of repeatable procedures.

  5. Software and “AI” labels do not relax evidentiary standards.
    The Tribunal’s approach shows that describing software work with buzzwords cannot replace clear evidence of experiments, hypotheses, and evaluation.

Legal Basis

Key statutory anchors (cited in AGLC4 style):

  1. Income Tax Assessment Act 1997 (Cth) s 355-20 (R&D activities are core or supporting).
  2. Income Tax Assessment Act 1997 (Cth) s 355-25(1) (definition of core R&D activities: experimental activities; outcomes not determinable in advance; systematic progression; purpose of generating new knowledge).
  3. Income Tax Assessment Act 1997 (Cth) s 355-30(1) (supporting R&D activities must be directly related to core R&D activities).
  4. Industry Research and Development Act 1986 (Cth) s 27A (registration of specified activities conducted during the income year).
  5. Industry Research and Development Act 1986 (Cth) s 27J and s 27L (findings and retrospective effect on registration).
  6. Industry Research and Development Act 1986 (Cth) s 30D and s 30E (internal review and external review pathways).

Evidence Chain

Eight “victory points” explain how law plus evidence produced the outcome:

  1. Victory Point 1: The Tribunal imposed a disciplined order of analysis.
    The Tribunal treated “conducted as registered” as a preliminary fact question. This prevented the Applicant from shifting the debate into broad claims about innovation and project ambition. Practically, this means parties must prepare evidence maps that answer: What was done? When? Where is it recorded? How does that record align with the registered activity description?

  2. Victory Point 2: The Tribunal demanded alignment between registered descriptions and records.
    The Applicant’s registrations described targeted experiments with defined participant groups, variables, and outcomes. The Tribunal found key documentary records did not reflect those descriptions. The result was not merely “weak evidence”; it was evidence that pointed away from the registered story.

  3. Victory Point 3: Testing evidence collapsed under specificity.
    In technical disputes, vague claims of “we tested extensively” are vulnerable. Here, the Tribunal recorded that the evidence of testing of algorithms might be no more than review of diagrams or flow charts. That matters because the statutory test is not satisfied by design review. It requires experimental activity where results are determined by systematic progression from hypothesis to experiment, observation, evaluation, and logical conclusion.

  4. Victory Point 4: Undated or out-of-year materials cannot safely prove in-year conduct.
    The Tribunal treated undated tables, post-relevant-year diagrams, and documents with unclear provenance as weak proof of what happened in the relevant year. This reinforces a practical rule: contemporaneous record-keeping is not optional in R&DTI matters.

  5. Victory Point 5: Expert evidence functioned as a reality check on methodology and artefacts.
    The Tribunal preferred expert opinions that identified missing elements expected in genuine software development and in genuine nutrition and exercise science experimentation. This is not about credential contests; it is about whether an expert can explain, with reasons, why the records do or do not demonstrate experimentation and systematic progression.

  6. Victory Point 6: The Tribunal separated conceptualisation from experimentation.
    Conceptual design and functional specifications can be legitimate steps in development, but they are not automatically “experimental activities”. The Tribunal’s reasoning indicates that where the records predominantly show conceptual stage materials, they may not prove experimental conduct.

  7. Victory Point 7: Novelty claims were treated as subordinate to proof of conduct.
    Even if a claimant sincerely believes their product is new, the statutory scheme focuses on activities and evidence. The Tribunal’s approach is a warning: novelty rhetoric does not fill evidentiary gaps.

  8. Victory Point 8: Funding constraints explain, but do not excuse, failure of proof.
    The Tribunal was prepared to accept that funding constraints existed and affected progress. However, the statutory scheme still requires evidence of what was done, not what was hoped. This is a crucial lesson for startups relying on the R&DTI: if you cannot fund robust record-keeping, you may not be able to defend a claim later.

Original Judicial Quotation

The Tribunal’s decisive reasoning on the practical focus of the statutory scheme can be captured in the following statement:

“A difficulty for the Applicant in making its case is that the statutory scheme focusses on what the R&D entity actually did in relation to the activities specified in its registration application, not what the R&D entity hoped or intended to do.”

This passage was determinative because it clarifies the Tribunal’s lens: the law is conduct-based. Evidence must prove actual experimental activity occurred as registered, within the relevant year, and in a manner capable of verification.

Analysis of the Losing Party’s Failure

The Applicant’s failure can be reduced to an evidence architecture problem:

  1. The Applicant’s strongest narrative was “we built something ambitious”. The Tribunal needed “we conducted these registered experiments in this year, and here is the contemporaneous record showing hypothesis, test procedures, raw outcomes, evaluation, and conclusions”.
  2. The Applicant’s documentary strategy relied on compilations, summaries, and high-level materials. The Tribunal identified misalignment between these materials and the registered activity descriptions.
  3. Where testing evidence was expected to be the clincher, the record did not provide reliable, dated, repeatable experimental trails.
  4. The Applicant’s attempt to rely on “whole of project” framing could not overcome the statutory requirement that registration and eligibility are assessed by reference to specified activities conducted during the income year.
  5. Expert evidence that undermined the reliability of the Applicant’s documentation, and the absence of key artefacts, was fatal in a scheme that relies on proof and verification.

Implications

Five practical legal implications for the general public:

  1. Innovation is real, but proof is what the law can recognise. If you want a legal benefit tied to R&D, treat record-keeping as part of the invention itself.
  2. Write your experiments like you are writing for your future opponent. Assume that years later, someone will ask: “Show me exactly what you did, on what dates, with what inputs, and what results.”
  3. A big idea is not an experiment. The law rewards uncertainty resolved by disciplined testing, not uncertainty described in marketing language.
  4. If you cannot show the work, you may lose even if you did the work. Courts and tribunals decide on evidence, not on sincerity.
  5. Get disciplined early. The earlier you design your documentation around the statutory test, the safer your position tends to be if your claim is later examined.

Q&A Session
  1. Q: Does this decision mean software projects can never qualify for the R&D Tax Incentive?
    A: No. The decision shows that software-related claims can be vulnerable if evidence does not demonstrate experimental activities and systematic progression. A software project can still qualify where there is genuine technical uncertainty and robust evidence of testing, evaluation, and conclusions.

  2. Q: If registration is initially approved, does that guarantee entitlement to the tax offset?
    A: No. Registration is a precondition, not a guarantee. The administrator can later examine registrations and make findings that retrospectively affect what is treated as registered. That is why contemporaneous records are critical.

  3. Q: What is the single biggest practical takeaway for claimants?
    A: Maintain records that match the registered activity descriptions. If you claim experiments with defined variables and outcomes, your records must show those experiments being conducted, not merely planned.


Appendix: Reference for Comparable Case Judgments and Practical Guidelines

1. Practical Positioning of This Case

Case Subtype: R&D Tax Incentive Registration and Eligibility Dispute (Division 355 ITAA 1997; IRD Act findings review)
Judgment Nature Definition: Final Judgment (Merits review decision affirming the reviewable decisions)

2. Self-examination of Core Statutory Elements

The following is a practical checklist aligned to the statutory architecture of this dispute. It is for reference only. Outcomes tend to be fact-sensitive and depend on the quality of contemporaneous evidence.

Core Test Standard: Eligibility Architecture for R&DTI Claims

Step 1: Identify the “R&D entity” and the relevant income year
– Confirm the entity falls within the statutory definition (for example, an Australian-incorporated body corporate) and identify the precise income year being claimed.

Step 2: Confirm registration of specified activities for that income year
– Ensure activities were registered under s 27A of the Industry Research and Development Act 1986 (Cth) for the income year.
– Ensure each supporting activity is tied to a specified core activity as required by the IRD Act framework.

Step 3: Prove the activities were conducted as registered in that income year
– Prepare an evidence map linking each registered activity description to contemporaneous records showing:
– who did the work,
– where it occurred,
– when it occurred, and
– what was actually done.
– Risk warning: where documents are undated, summary-only, version-inconsistent, or not aligned to the registered descriptions, it tends to be harder to prove conduct.
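The evidence map described in Step 3 can be sketched as a simple data structure. This is a minimal illustration only: the class names, fields, and example activity below are hypothetical, not drawn from the case file, and the in-year check assumes a standard Australian income year ending 30 June.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Record:
    """One contemporaneous record supporting a registered activity."""
    when: date    # when the work occurred
    who: str      # who did the work
    where: str    # where it occurred
    what: str     # what was actually done
    source: str   # the document the record comes from

@dataclass
class RegisteredActivity:
    """An activity as described in the registration for one income year."""
    name: str
    income_year: int  # e.g. 2019 = year ended 30 June 2019 (assumed standard year)
    records: list[Record] = field(default_factory=list)

def gaps(activities: list[RegisteredActivity]) -> list[str]:
    """Return the names of activities with no dated record in their income year."""
    missing = []
    for a in activities:
        start, end = date(a.income_year - 1, 7, 1), date(a.income_year, 6, 30)
        if not any(start <= r.when <= end for r in a.records):
            missing.append(a.name)
    return missing

# Hypothetical usage: an activity registered for FY2019 with no records yet.
act = RegisteredActivity("Calorie algorithm field trial", 2019)
print(gaps([act]))  # ['Calorie algorithm field trial']
```

The point of the sketch is the discipline, not the tooling: every registered activity description should map to at least one dated, attributable, in-year record before a claim is lodged.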

Step 4: For each claimed core R&D activity, apply s 355-25(1) ITAA 1997

4A. Outcome cannot be known or determined in advance
– Identify the specific technical uncertainty.
– Explain why competent professionals could not determine the outcome in advance on the basis of existing knowledge, information, or experience.
– Show how experimentation was necessary to resolve that uncertainty.
– Risk warning: if the work tends to be characterised as applying known techniques, routine engineering, or predictable implementation, the uncertainty element can be difficult to establish.

4B. Systematic progression of work based on principles of established science
– Show hypothesis → experiment → observation → evaluation → logical conclusions.
– Provide raw or contemporaneous experimental records: test plans with dates, test cases executed, data outputs, evaluation notes, and conclusion statements tied back to the hypothesis.
– Risk warning: purely conceptual documents or post-hoc narratives often struggle to demonstrate this progression.
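The hypothesis → experiment → observation → evaluation → conclusion chain in Step 4B can also be enforced mechanically at record-keeping time. The template below is a hypothetical sketch (the field names, sample entry, and file name are illustrative, not from the case): it flags any experiment-log entry that leaves part of the statutory chain empty.

```python
from datetime import date

# Fields tracing the statutory chain:
# hypothesis -> experiment -> observation -> evaluation -> conclusion.
ENTRY_FIELDS = ("run_date", "hypothesis", "procedure", "raw_observations",
                "evaluation", "conclusion")

def validate_entry(entry: dict) -> list[str]:
    """Return the statutory-chain fields that are missing or empty."""
    return [f for f in ENTRY_FIELDS if not entry.get(f)]

# Hypothetical log entry, partly complete: evaluation and conclusion not yet written.
sample = {
    "run_date": date(2019, 3, 4),
    "hypothesis": "Calorie burn for a fixed activity rises with participant age.",
    "procedure": "3 repeats per participant, same time of day, weight recorded.",
    "raw_observations": "see data sheet run-2019-03-04.csv",
    "evaluation": "",
    "conclusion": "",
}

print(validate_entry(sample))  # ['evaluation', 'conclusion']
```

Validating entries as they are written, rather than reconstructing them years later, is exactly the kind of contemporaneous trail the Tribunal found missing in this case.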

4C. Purpose of generating new knowledge
– Identify what knowledge is new, and how it differs from what existed in the field.
– Show the novelty is in the knowledge generated by experimentation, not merely in combining known elements.
– Risk warning: novelty assertions without evidence of experimental generation tend to be less persuasive.

Step 5: For each supporting R&D activity, apply s 355-30(1) ITAA 1997
– Prove the activity is directly related to an eligible core R&D activity.
– Demonstrate a direct, close, and relatively immediate relationship, rather than a general relationship to the broader project.
– Risk warning: if no eligible core activity is proven, supporting activities often fall away.

Step 6: Exclusions check
– Identify whether the activity is excluded by s 355-25(2) (for example, market research, market testing, or market development).
– Risk warning: if documentation mainly shows market surveys, competitor reviews, or commercial planning, the activity may be treated as excluded.

Step 7: Expenditure linkage (practical tax consequence step)
– Ensure claimed expenditure is incurred on registered R&D activities and is properly documented.
– Risk warning: even where activities qualify, poor expenditure tracing can create further dispute risk.

3. Equitable Remedies and Alternative Claims

The statutory scheme may not yield relief. The following alternatives may sometimes be explored, depending on facts and jurisdiction. These are not guarantees and tend to be determined case-by-case.

Procedural Fairness
  • If a decision-maker fails to provide a fair opportunity to be heard, relies on undisclosed adverse material, or demonstrates apprehended bias, judicial review pathways may sometimes be explored.
  • In this dispute type, a party might investigate whether the process involved procedural unfairness, but must also recognise that merits review tribunals often focus on the correct or preferable decision on the evidence.

Ancillary Claims
  • If a claimant believes they suffered loss due to administrative conduct, compensation pathways may exist in other forums, but tribunals reviewing R&DTI registration findings typically cannot award damages for economic loss arising from refusal of offsets.
  • Commercial avenues may sometimes include renegotiation of RSP contracts or internal governance reforms to prevent future compliance failures.

Unjust Enrichment / Restitution (limited relevance in this context)
  • In some scenarios involving third parties, a claimant may consider whether funds were retained without basis, but this tends to sit outside the core statutory dispute and requires careful legal analysis.

4. Access Thresholds and Exceptional Circumstances

This dispute type includes hard procedural and evidentiary thresholds.

Regular Thresholds
  • Registration must be made within the required timeframes and in the approved form under the IRD Act.
  • Activities must be specified for the income year and then proven to have been conducted in that year.
  • Core R&D activities must meet s 355-25(1) and not be excluded by s 355-25(2).
  • Supporting activities must be directly related to eligible core activities under s 355-30(1).

Exceptional Channels
  • Where records are incomplete, some claimants attempt to rely on later-year work to “shed light” on earlier-year activities. This can sometimes assist, but tends to be limited, and will rarely overcome a complete lack of contemporaneous evidence aligned to registered descriptions.
  • Where an entity’s activities evolved across years, careful drafting of registrations and meticulous year-by-year record retention can reduce the risk of later findings that activities were not conducted as registered.

Suggestion: Do not abandon a potential claim simply because you think your project was innovative. Compare your documentation and experimental records against the statutory elements. Where gaps exist, seek professional guidance early, because reconstruction years later tends to be a relatively high-risk strategy.

5. Guidelines for Judicial and Legal Citation

Citation Angle

It is recommended to cite this case in submissions involving:
– the threshold requirement that an applicant prove activities were conducted as registered in the relevant income year;
– the limits of “whole of project” framing in Division 355 disputes;
– evidentiary standards for proving experimentation and systematic progression in software-adjacent R&D claims.

Citation Method

As Positive Support:
– Where your matter involves detailed contemporaneous records mapping registered activities to experiments, this case can be used to emphasise that disciplined evidence alignment is decisive.

As a Distinguishing Reference:
– If the opposing party cites this case to argue your claim fails, you can distinguish it by demonstrating:
– robust dated test execution records,
– raw data and evaluation notes,
– clear technical uncertainty not determinable in advance, and
– a transparent chain from hypothesis to conclusions.

Anonymisation Rule: Use Applicant / Respondent in narrative contexts; cite the authority as published where necessary for formal citation.

Reference to Comparable Authorities
  1. Moreton Resources Limited and Innovation and Science Australia [2019] FCAFC 120
    Ratio Summary: Emphasises the statutory focus on registered activities and the importance of applying the legislative criteria to the activities claimed, rather than broad project narratives.

  2. Coal of Queensland Pty Ltd v Innovation and Science Australia [2021] FCAFC 54
    Ratio Summary: Addresses the proper construction of R&D activity concepts within the statutory framework and cautions against approaches that lose sight of the statutory language.

  3. Absolute Vision Technologies Pty Limited and Innovation and Science Australia (Taxation) [2022] AATA 2319
    Ratio Summary: Illustrates tribunal treatment of registered activities and the need for applicants to prove that activities satisfy statutory criteria rather than relying on general innovation assertions.

  4. Body by Michael Pty Ltd and Industry Innovation and Science Australia [2025] ARTA 44
    Ratio Summary: Reinforces the year-specific nature of the inquiry and the difficulty created when evidence focuses on the overall project rather than what was done in the particular income year.

Conclusion

This decision teaches a disciplined legal truth: the R&D Tax Incentive is not a prize for ambition. It is a statutory benefit for proven experimental activities conducted as registered, supported by records that allow verification.

Everyone benefits from understanding the law and seeing the world through its lens. This in-depth analysis of an authentic judgment is intended to help readers gradually build a new legal mindset: true self-protection stems from early understanding and mastery of legal rules.

Disclaimer

This article is based on the study and analysis of the public judgment of the Administrative Review Tribunal ([Applicant and Respondent (Taxation) [2025] ARTA 1813]), aimed at promoting legal research and public understanding. The citation of relevant judgment content is limited to the scope of fair dealing for the purposes of legal research, comment, and information sharing.

The analysis, structural arrangement, and expression of views contained in this article are the original content of the author, and the copyright belongs to the author and this platform. This article does not constitute legal advice, nor should it be regarded as legal advice for any specific situation.

