Enterprises face growing pressure to build a clear view of their customers as people move between apps, websites, and stores. Traditional segmentation no longer helps much. For CIOs and CMOs trying to tailor services at scale, models that read intent from behaviour are becoming a practical way forward.
NTT and NTT DOCOMO recently developed a Large Action Model (LAM) that moves past simple demographic labels and examines the order of each customer's actions. While the concept seems tied to marketing at first, the same approach can support other areas, such as patient care or energy planning, where time and sequence shape the outcome.
Why this matters
Customer activity now comes from many places, each producing different kinds of data. Apps generate constant logs, while in-store systems collect slower, more structured details such as purchases or payment methods. Many organisations still find it hard to combine these streams into a single view that supports customer insight and personalised outreach.
This gap affects sales, increases operating costs, and forces teams to rely on guesswork. LAMs try to resolve it by paying attention to the order and context of each action, enabling faster decisions, better timing, and more relevant contact with customers.
What NTT and DOCOMO created
DOCOMO built a platform that organises customer information using a simple "4W1H" structure: who did what, when, where, and how. NTT developed a model that learns patterns in time-series data, handling both numerical and categorical inputs. Together, the system predicts what a customer may do next and identifies who is most likely to respond to outreach.
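A 4W1H event record could be sketched as follows. This is a minimal illustration of the idea, not DOCOMO's actual schema; all field and variable names are assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ActionEvent:
    """One customer action in 4W1H form (illustrative schema only)."""
    who: str        # customer identifier
    what: str       # action type, e.g. "call", "browse", "purchase"
    when: datetime  # timestamp, preserving event order
    where: str      # channel, e.g. "app", "web", "store"
    how: str        # method, e.g. "phone", "credit_card"

# A customer journey is then a time-ordered sequence of events
journey = sorted([
    ActionEvent("cust_42", "browse", datetime(2025, 3, 1, 12, 30), "web", "mobile"),
    ActionEvent("cust_42", "call", datetime(2025, 3, 1, 10, 0), "phone", "voice"),
    ActionEvent("cust_42", "purchase", datetime(2025, 3, 2, 9, 15), "store", "credit_card"),
], key=lambda e: e.when)
```

Sorting by timestamp is what preserves the sequence information the model depends on.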
The model pays close attention to the sequence of events. For example:
- A call followed by browsing and a purchase suggests the call created awareness.
- Browsing, then a call, then a purchase may show the customer wanted more clarity.
- A call after a purchase may point to a support need.
Because the system reads actions in context, its intent scoring becomes more accurate.
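The sequence-sensitive reading above can be illustrated with a toy rule. This is a deliberately simplified stand-in for the learned model, not the actual LAM:

```python
def infer_intent(actions):
    """Toy illustration: map an ordered action sequence to a likely intent.
    A hand-written stand-in for the learned model, not DOCOMO's system."""
    seq = tuple(actions)
    if seq == ("call", "browse", "purchase"):
        return "call created awareness"
    if seq == ("browse", "call", "purchase"):
        return "customer wanted more clarity"
    if seq[-2:] == ("purchase", "call"):
        return "possible support need"
    return "unknown"

# The same three actions in different orders imply different intents
print(infer_intent(["call", "browse", "purchase"]))   # call created awareness
print(infer_intent(["browse", "call", "purchase"]))   # customer wanted more clarity
print(infer_intent(["browse", "purchase", "call"]))   # possible support need
```

The point is that order, not just the set of actions, carries the signal; the real model learns such patterns from data rather than from hand-written rules.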
Training the model was also efficient: DOCOMO used eight NVIDIA A100 GPUs and finished training in under a day, around 145 GPU-hours in total. This is far lower than the demands of large language models, making the approach practical for organisations that want advanced modelling without high infrastructure costs.
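The reported figures are internally consistent: 145 GPU-hours spread across eight GPUs implies roughly 18 hours of wall-clock time, comfortably under a day:

```python
gpus = 8           # NVIDIA A100s used for training
gpu_hours = 145    # reported total training budget

# Wall-clock time if all GPUs run in parallel for the full job
wall_clock_hours = gpu_hours / gpus
print(wall_clock_hours)  # 18.125
```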
How it was used
DOCOMO tested the model in its telemarketing work. By ranking customers by how likely they are to respond to outreach, the company doubled the order rate for mobile and smart-life services compared with older methods.
Customer interviews showed that timing was key. Some people couldn't visit a store because of childcare, while others were unsure about switching plans. The model helped identify the right moment to contact them, instead of relying on broad campaign cycles.
This approach has wider implications:
- Operations: Staff can focus on fewer but more meaningful conversations.
- Efficiency: AI reduces outreach that customers don’t want.
- Governance: Consistent time-series data provides a clearer record of decisions.
- Platform alignment: The model can run alongside cloud-AI platforms such as AWS Bedrock, Azure AI Foundry, or Google Vertex AI.
Use in healthcare and energy
The same method applies to other fields where timing matters. In healthcare, medical records capture long patterns in symptoms and treatments, and the order in which these appear can affect care plans. NTT is testing LAMs to help support diabetes treatment by studying how conditions progress.
In the energy sector, weather affects solar generation. Sensors on the ground and in satellites track patterns that move over time, and LAMs can help operators predict sunlight levels and adjust generation and trading decisions.
These examples show why chief data, operations, and risk officers may soon look at LAM-style tools to improve customer insight and guide better decisions.
What enterprises should take into consideration
Rolling out an intent-prediction model is not only a technical task. It depends on data quality, team alignment, and clear oversight. Common issues include:
- Data unification: Many organisations still have scattered time-series data that must be mapped into a shared structure.
- Model oversight: Because predictions affect revenue and customer trust, teams need clear ways to review how the model makes decisions.
- Culture: Staff need confidence in AI-driven prioritisation. Without that, adoption slows.
- Infrastructure: Even though LAMs cost less than large language models, they still require planning around training, storage, cloud use, and security.
By building strong data foundations and setting clear goals, organisations can use models like this to improve customer engagement, strengthen customer insight, and apply the same thinking across other areas of the business.
(Photo by Lukas Blazek)
See also: Why leading brands are moving to SaaS marketing mix modelling
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events; click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.









