When consuming the Gazelle dataset via the Gazelle Web Application or Snowflake, I want consistently fresh and comprehensive contact information.
Delivering a solution at this scale would give customers reliable, comprehensive contact information in a way we have never delivered before.
The automation would greatly reduce manual effort, and deduplicating/merging profiles from multiple sources reduces our reliance on any single vendor.
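As a rough illustration of the multi-source dedup/merge described above, the PySpark sketch below keeps the freshest record per person/company key across two vendor feeds. The table names (`contacts_vendor_a`/`_b`) and columns (`full_name`, `company_id`, `updated_at`) are illustrative assumptions, not the pipeline's actual schema.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical vendor tables; real source names/schemas will differ.
vendor_a = spark.table("contacts_vendor_a")
vendor_b = spark.table("contacts_vendor_b")

profiles = (
    vendor_a.unionByName(vendor_b, allowMissingColumns=True)
    .withColumn("name_key", F.lower(F.trim(F.col("full_name"))))
)

# Keep the most recently updated record per (person, company) key so
# no single vendor becomes load-bearing for any profile.
w = Window.partitionBy("name_key", "company_id").orderBy(F.col("updated_at").desc())
merged = (
    profiles.withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn", "name_key")
)
```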
A Gazelle client who needs data on contacts working at selected companies
Gazelle Web Application
The People Pipeline allows sourcing of new contacts for the companies produced as Gazelle golden records (the output of the NeoETL pipelines).
The People Pipeline is implemented as a series of Databricks notebooks made available for external orchestration; each run produces a file that can be sent off for translation. It is optimized for regular re-running.
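A minimal sketch of what a re-runnable, externally orchestrated notebook entry point could look like. The widget name, table, and export path are assumptions, not the pipeline's actual parameters; `dbutils` is provided by the Databricks notebook runtime rather than imported.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

dbutils.widgets.text("run_date", "")          # supplied by the external orchestrator
run_date = dbutils.widgets.get("run_date")

# Hypothetical output table of the merge step.
contacts = spark.table("people_pipeline.merged_contacts")

# Overwrite a date-keyed export so re-runs for the same date are
# idempotent; the file is then handed off downstream.
(
    contacts.filter(f"snapshot_date = '{run_date}'")
    .coalesce(1)
    .write.mode("overwrite")
    .option("header", True)
    .csv(f"/mnt/exports/people/{run_date}")
)
```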
Part of the assessment for choosing the highest-quality contact data includes a review of PDL as an additional or substitute source.
What is explicitly not a part of this epic? List things that have been discussed but will not be included. Things you imagine in a phase 2, etc.
<FigmaLink>
Consider performance characteristics, privacy/security implications, localization requirements, mobile requirements, and accessibility requirements.
Is there any work that must precede this? Feature work? Ops work?
Just answer yes or no.
Initial rollout to [internal employees|sales demos|1-2 specific beta customers|all customers]
If specific beta customers, will it be for a specific survey launch date or report availability date?
How will this guide the rollout of individual stories in the epic?
The rollout strategy should be discussed with CS, Marketing, and Sales.
How long would we tolerate a “partial rollout” (rolled out to some customers but not all)?
Focus on risks unique to this feature, not overall delivery/execution risks.
Scope the steps still to be completed to have a fully operational and scalable solution in place, including integration with the email sourcing solution and the pinging microservice (a hypothetical client sketch follows this list)
Identify opportunities for Lightcast teams to offload some of the work in this pipeline, including QA
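The pinging-microservice integration is not yet specified; as a placeholder, a client could look like the sketch below. The endpoint URL, request payload, and response fields are all assumptions, not the service's actual contract.

```python
import requests

PING_URL = "https://ping-service.internal/v1/verify"  # hypothetical endpoint

def verify_email(address: str, timeout: float = 5.0) -> bool:
    """Return True if the service reports the address as deliverable."""
    resp = requests.post(PING_URL, json={"email": address}, timeout=timeout)
    resp.raise_for_status()
    # Assumed response shape: {"deliverable": true/false}
    return resp.json().get("deliverable", False)

if __name__ == "__main__":
    print(verify_email("jane.doe@example.com"))
```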
Are there direct costs that this feature entails? Dataset acquisition, server purchasing, software licenses, etc.?
Each team involved should give a general t-shirt size estimate of their work involved. As the epic proceeds, they can add a link to the Jira epic/issue associated with their portion of this work.
| Team | Effort Estimate (T-shirt size) | Jira Link |
|---|---|---|
| Gazelle-Micro/Devops | S | |
| Gazelle-ETL | S | |